net: dsa: Use conduit and user terms

Use more inclusive terms throughout the DSA subsystem by moving away
from "master" which is replaced by "conduit" and "slave" which is
replaced by "user". No functional changes.

Acked-by: Rob Herring <robh@kernel.org>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Florian Fainelli <florian.fainelli@broadcom.com>
Link: https://lore.kernel.org/r/20231023181729.1191071-2-florian.fainelli@broadcom.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Florian Fainelli 2023-10-23 11:17:28 -07:00 committed by Jakub Kicinski
parent 00e984cb98
commit 6ca80638b9
74 changed files with 1556 additions and 1553 deletions


@@ -60,7 +60,7 @@ description: |
 Check out example 6.
-- Port 5 can be wired to an external phy. Port 5 becomes a DSA slave.
+- Port 5 can be wired to an external phy. Port 5 becomes a DSA user port.
 For the multi-chip module MT7530, the external phy must be wired TX to TX
 to gmac1 of the SoC for this to work. Ubiquiti EdgeRouter X SFP is wired


@@ -52,7 +52,7 @@ VLAN programming would basically change the CPU port's default PVID and make
 it untagged, undesirable.
 In difference to the configuration described in :ref:`dsa-vlan-configuration`
-the default VLAN 1 has to be removed from the slave interface configuration in
+the default VLAN 1 has to be removed from the user interface configuration in
 single port and gateway configuration, while there is no need to add an extra
 VLAN configuration in the bridge showcase.
@@ -68,13 +68,13 @@ By default packages are tagged with vid 1:
 ip link add link eth0 name eth0.2 type vlan id 2
 ip link add link eth0 name eth0.3 type vlan id 3
-# The master interface needs to be brought up before the slave ports.
+# The conduit interface needs to be brought up before the user ports.
 ip link set eth0 up
 ip link set eth0.1 up
 ip link set eth0.2 up
 ip link set eth0.3 up
-# bring up the slave interfaces
+# bring up the user interfaces
 ip link set wan up
 ip link set lan1 up
 ip link set lan2 up
@@ -113,11 +113,11 @@ bridge
 # tag traffic on CPU port
 ip link add link eth0 name eth0.1 type vlan id 1
-# The master interface needs to be brought up before the slave ports.
+# The conduit interface needs to be brought up before the user ports.
 ip link set eth0 up
 ip link set eth0.1 up
-# bring up the slave interfaces
+# bring up the user interfaces
 ip link set wan up
 ip link set lan1 up
 ip link set lan2 up
@@ -149,12 +149,12 @@ gateway
 ip link add link eth0 name eth0.1 type vlan id 1
 ip link add link eth0 name eth0.2 type vlan id 2
-# The master interface needs to be brought up before the slave ports.
+# The conduit interface needs to be brought up before the user ports.
 ip link set eth0 up
 ip link set eth0.1 up
 ip link set eth0.2 up
-# bring up the slave interfaces
+# bring up the user interfaces
 ip link set wan up
 ip link set lan1 up
 ip link set lan2 up


@@ -67,7 +67,7 @@ MDIO indirect accesses
 ----------------------
 Due to a limitation in how Broadcom switches have been designed, external
-Broadcom switches connected to a SF2 require the use of the DSA slave MDIO bus
+Broadcom switches connected to a SF2 require the use of the DSA user MDIO bus
 in order to properly configure them. By default, the SF2 pseudo-PHY address, and
 an external switch pseudo-PHY address will both be snooping for incoming MDIO
 transactions, since they are at the same address (30), resulting in some kind of


@@ -31,38 +31,38 @@ at https://www.kernel.org/pub/linux/utils/net/iproute2/
 Through DSA every port of a switch is handled like a normal linux Ethernet
 interface. The CPU port is the switch port connected to an Ethernet MAC chip.
-The corresponding linux Ethernet interface is called the master interface.
-All other corresponding linux interfaces are called slave interfaces.
-The slave interfaces depend on the master interface being up in order for them
-to send or receive traffic. Prior to kernel v5.12, the state of the master
+The corresponding linux Ethernet interface is called the conduit interface.
+All other corresponding linux interfaces are called user interfaces.
+The user interfaces depend on the conduit interface being up in order for them
+to send or receive traffic. Prior to kernel v5.12, the state of the conduit
 interface had to be managed explicitly by the user. Starting with kernel v5.12,
 the behavior is as follows:
-- when a DSA slave interface is brought up, the master interface is
+- when a DSA user interface is brought up, the conduit interface is
 automatically brought up.
-- when the master interface is brought down, all DSA slave interfaces are
+- when the conduit interface is brought down, all DSA user interfaces are
 automatically brought down.
 In this documentation the following Ethernet interfaces are used:
 *eth0*
-the master interface
+the conduit interface
 *eth1*
-another master interface
+another conduit interface
 *lan1*
-a slave interface
+a user interface
 *lan2*
-another slave interface
+another user interface
 *lan3*
-a third slave interface
+a third user interface
 *wan*
-A slave interface dedicated for upstream traffic
+A user interface dedicated for upstream traffic
 Further Ethernet interfaces can be configured similar.
 The configured IPs and networks are:
@@ -96,11 +96,11 @@ without using a VLAN based configuration.
 ip addr add 192.0.2.5/30 dev lan2
 ip addr add 192.0.2.9/30 dev lan3
-# For kernels earlier than v5.12, the master interface needs to be
-# brought up manually before the slave ports.
+# For kernels earlier than v5.12, the conduit interface needs to be
+# brought up manually before the user ports.
 ip link set eth0 up
-# bring up the slave interfaces
+# bring up the user interfaces
 ip link set lan1 up
 ip link set lan2 up
 ip link set lan3 up
@@ -108,11 +108,11 @@ without using a VLAN based configuration.
 *bridge*
 .. code-block:: sh
-# For kernels earlier than v5.12, the master interface needs to be
-# brought up manually before the slave ports.
+# For kernels earlier than v5.12, the conduit interface needs to be
+# brought up manually before the user ports.
 ip link set eth0 up
-# bring up the slave interfaces
+# bring up the user interfaces
 ip link set lan1 up
 ip link set lan2 up
 ip link set lan3 up
@@ -134,11 +134,11 @@ without using a VLAN based configuration.
 *gateway*
 .. code-block:: sh
-# For kernels earlier than v5.12, the master interface needs to be
-# brought up manually before the slave ports.
+# For kernels earlier than v5.12, the conduit interface needs to be
+# brought up manually before the user ports.
 ip link set eth0 up
-# bring up the slave interfaces
+# bring up the user interfaces
 ip link set wan up
 ip link set lan1 up
 ip link set lan2 up
@@ -178,14 +178,14 @@ configuration.
 ip link add link eth0 name eth0.2 type vlan id 2
 ip link add link eth0 name eth0.3 type vlan id 3
-# For kernels earlier than v5.12, the master interface needs to be
-# brought up manually before the slave ports.
+# For kernels earlier than v5.12, the conduit interface needs to be
+# brought up manually before the user ports.
 ip link set eth0 up
 ip link set eth0.1 up
 ip link set eth0.2 up
 ip link set eth0.3 up
-# bring up the slave interfaces
+# bring up the user interfaces
 ip link set lan1 up
 ip link set lan2 up
 ip link set lan3 up
@@ -221,12 +221,12 @@ configuration.
 # tag traffic on CPU port
 ip link add link eth0 name eth0.1 type vlan id 1
-# For kernels earlier than v5.12, the master interface needs to be
-# brought up manually before the slave ports.
+# For kernels earlier than v5.12, the conduit interface needs to be
+# brought up manually before the user ports.
 ip link set eth0 up
 ip link set eth0.1 up
-# bring up the slave interfaces
+# bring up the user interfaces
 ip link set lan1 up
 ip link set lan2 up
 ip link set lan3 up
@@ -261,13 +261,13 @@ configuration.
 ip link add link eth0 name eth0.1 type vlan id 1
 ip link add link eth0 name eth0.2 type vlan id 2
-# For kernels earlier than v5.12, the master interface needs to be
-# brought up manually before the slave ports.
+# For kernels earlier than v5.12, the conduit interface needs to be
+# brought up manually before the user ports.
 ip link set eth0 up
 ip link set eth0.1 up
 ip link set eth0.2 up
-# bring up the slave interfaces
+# bring up the user interfaces
 ip link set wan up
 ip link set lan1 up
 ip link set lan2 up
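For a quick check that a VLAN-based setup like the ones above took effect, the stock iproute2/bridge tools are enough; this is only an illustrative sketch reusing the interface names from the examples (eth0, eth0.1, the wan/lan user ports):

.. code-block:: sh

    # list the VLANs each bridge port is a member of
    bridge vlan show
    # inspect the 802.1Q sub-interface stacked on the conduit
    ip -d link show dev eth0.1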
@@ -380,22 +380,22 @@ affinities according to the available CPU ports.
 Secondly, it is possible to perform load balancing between CPU ports on a per
 packet basis, rather than statically assigning user ports to CPU ports.
-This can be achieved by placing the DSA masters under a LAG interface (bonding
+This can be achieved by placing the DSA conduits under a LAG interface (bonding
 or team). DSA monitors this operation and creates a mirror of this software LAG
-on the CPU ports facing the physical DSA masters that constitute the LAG slave
+on the CPU ports facing the physical DSA conduits that constitute the LAG slave
 devices.
 To make use of multiple CPU ports, the firmware (device tree) description of
-the switch must mark all the links between CPU ports and their DSA masters
+the switch must mark all the links between CPU ports and their DSA conduits
 using the ``ethernet`` reference/phandle. At startup, only a single CPU port
-and DSA master will be used - the numerically first port from the firmware
+and DSA conduit will be used - the numerically first port from the firmware
 description which has an ``ethernet`` property. It is up to the user to
-configure the system for the switch to use other masters.
+configure the system for the switch to use other conduits.
 DSA uses the ``rtnl_link_ops`` mechanism (with a "dsa" ``kind``) to allow
-changing the DSA master of a user port. The ``IFLA_DSA_MASTER`` u32 netlink
-attribute contains the ifindex of the master device that handles each slave
-device. The DSA master must be a valid candidate based on firmware node
+changing the DSA conduit of a user port. The ``IFLA_DSA_MASTER`` u32 netlink
+attribute contains the ifindex of the conduit device that handles each user
+device. The DSA conduit must be a valid candidate based on firmware node
 information, or a LAG interface which contains only slaves which are valid
 candidates.
@@ -403,7 +403,7 @@ Using iproute2, the following manipulations are possible:
 .. code-block:: sh
-# See the DSA master in current use
+# See the DSA conduit in current use
 ip -d link show dev swp0
 (...)
 dsa master eth0
@@ -414,7 +414,7 @@ Using iproute2, the following manipulations are possible:
 ip link set swp2 type dsa master eth1
 ip link set swp3 type dsa master eth0
-# CPU ports in LAG, using explicit assignment of the DSA master
+# CPU ports in LAG, using explicit assignment of the DSA conduit
 ip link add bond0 type bond mode balance-xor && ip link set bond0 up
 ip link set eth1 down && ip link set eth1 master bond0
 ip link set swp0 type dsa master bond0
@@ -426,7 +426,7 @@ Using iproute2, the following manipulations are possible:
 (...)
 dsa master bond0
-# CPU ports in LAG, relying on implicit migration of the DSA master
+# CPU ports in LAG, relying on implicit migration of the DSA conduit
 ip link add bond0 type bond mode balance-xor && ip link set bond0 up
 ip link set eth0 down && ip link set eth0 master bond0
 ip link set eth1 down && ip link set eth1 master bond0
@@ -436,23 +436,23 @@ Using iproute2, the following manipulations are possible:
 Notice that in the case of CPU ports under a LAG, the use of the
 ``IFLA_DSA_MASTER`` netlink attribute is not strictly needed, but rather, DSA
-reacts to the ``IFLA_MASTER`` attribute change of its present master (``eth0``)
+reacts to the ``IFLA_MASTER`` attribute change of its present conduit (``eth0``)
 and migrates all user ports to the new upper of ``eth0``, ``bond0``. Similarly,
 when ``bond0`` is destroyed using ``RTM_DELLINK``, DSA migrates the user ports
-that were assigned to this interface to the first physical DSA master which is
+that were assigned to this interface to the first physical DSA conduit which is
 eligible, based on the firmware description (it effectively reverts to the
 startup configuration).
 In a setup with more than 2 physical CPU ports, it is therefore possible to mix
-static user to CPU port assignment with LAG between DSA masters. It is not
-possible to statically assign a user port towards a DSA master that has any
-upper interfaces (this includes LAG devices - the master must always be the LAG
+static user to CPU port assignment with LAG between DSA conduits. It is not
+possible to statically assign a user port towards a DSA conduit that has any
+upper interfaces (this includes LAG devices - the conduit must always be the LAG
 in this case).
-Live changing of the DSA master (and thus CPU port) affinity of a user port is
+Live changing of the DSA conduit (and thus CPU port) affinity of a user port is
 permitted, in order to allow dynamic redistribution in response to traffic.
-Physical DSA masters are allowed to join and leave at any time a LAG interface
-used as a DSA master; however, DSA will reject a LAG interface as a valid
-candidate for being a DSA master unless it has at least one physical DSA master
+Physical DSA conduits are allowed to join and leave at any time a LAG interface
+used as a DSA conduit; however, DSA will reject a LAG interface as a valid
+candidate for being a DSA conduit unless it has at least one physical DSA conduit
 as a slave device.


@@ -25,7 +25,7 @@ presence of a management port connected to an Ethernet controller capable of
 receiving Ethernet frames from the switch. This is a very common setup for all
 kinds of Ethernet switches found in Small Home and Office products: routers,
 gateways, or even top-of-rack switches. This host Ethernet controller will
-be later referred to as "master" and "cpu" in DSA terminology and code.
+be later referred to as "conduit" and "cpu" in DSA terminology and code.
 The D in DSA stands for Distributed, because the subsystem has been designed
 with the ability to configure and manage cascaded switches on top of each other
@@ -35,7 +35,7 @@ of multiple switches connected to each other is called a "switch tree".
 For each front-panel port, DSA creates specialized network devices which are
 used as controlling and data-flowing endpoints for use by the Linux networking
-stack. These specialized network interfaces are referred to as "slave" network
+stack. These specialized network interfaces are referred to as "user" network
 interfaces in DSA terminology and code.
 The ideal case for using DSA is when an Ethernet switch supports a "switch tag"
@@ -56,12 +56,16 @@ Note that DSA does not currently create network interfaces for the "cpu" and
 - the "cpu" port is the Ethernet switch facing side of the management
 controller, and as such, would create a duplication of feature, since you
-would get two interfaces for the same conduit: master netdev, and "cpu" netdev
+would get two interfaces for the same conduit: conduit netdev, and "cpu" netdev
 - the "dsa" port(s) are just conduits between two or more switches, and as such
 cannot really be used as proper network interfaces either, only the
 downstream, or the top-most upstream interface makes sense with that model
+NB: for the past 15 years, the DSA subsystem had been making use of the terms
+"master" (rather than "conduit") and "slave" (rather than "user"). These terms
+have been removed from the DSA codebase and phased out of the uAPI.
 Switch tagging protocols
 ------------------------
@@ -80,14 +84,14 @@ methods of the ``struct dsa_device_ops`` structure, which are detailed below.
 Tagging protocols generally fall in one of three categories:
 1. The switch-specific frame header is located before the Ethernet header,
-shifting to the right (from the perspective of the DSA master's frame
+shifting to the right (from the perspective of the DSA conduit's frame
 parser) the MAC DA, MAC SA, EtherType and the entire L2 payload.
 2. The switch-specific frame header is located before the EtherType, keeping
-the MAC DA and MAC SA in place from the DSA master's perspective, but
+the MAC DA and MAC SA in place from the DSA conduit's perspective, but
 shifting the 'real' EtherType and L2 payload to the right.
 3. The switch-specific frame header is located at the tail of the packet,
 keeping all frame headers in place and not altering the view of the packet
-that the DSA master's frame parser has.
+that the DSA conduit's frame parser has.
 A tagging protocol may tag all packets with switch tags of the same length, or
 the tag length might vary (for example packets with PTP timestamps might
@@ -95,7 +99,7 @@ require an extended switch tag, or there might be one tag length on TX and a
 different one on RX). Either way, the tagging protocol driver must populate the
 ``struct dsa_device_ops::needed_headroom`` and/or ``struct dsa_device_ops::needed_tailroom``
 with the length in octets of the longest switch frame header/trailer. The DSA
-framework will automatically adjust the MTU of the master interface to
+framework will automatically adjust the MTU of the conduit interface to
 accommodate for this extra size in order for DSA user ports to support the
 standard MTU (L2 payload length) of 1500 octets. The ``needed_headroom`` and
 ``needed_tailroom`` properties are also used to request from the network stack,
@@ -140,18 +144,18 @@ adding or removing the ``ETH_P_EDSA`` EtherType and some padding octets).
 It is possible to construct cascaded setups of DSA switches even if their
 tagging protocols are not compatible with one another. In this case, there are
 no DSA links in this fabric, and each switch constitutes a disjoint DSA switch
-tree. The DSA links are viewed as simply a pair of a DSA master (the out-facing
+tree. The DSA links are viewed as simply a pair of a DSA conduit (the out-facing
 port of the upstream DSA switch) and a CPU port (the in-facing port of the
 downstream DSA switch).
 The tagging protocol of the attached DSA switch tree can be viewed through the
-``dsa/tagging`` sysfs attribute of the DSA master::
+``dsa/tagging`` sysfs attribute of the DSA conduit::
 cat /sys/class/net/eth0/dsa/tagging
 If the hardware and driver are capable, the tagging protocol of the DSA switch
 tree can be changed at runtime. This is done by writing the new tagging
-protocol name to the same sysfs device attribute as above (the DSA master and
+protocol name to the same sysfs device attribute as above (the DSA conduit and
 all attached switch ports must be down while doing this).
 It is desirable that all tagging protocols are testable with the ``dsa_loop``
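Spelled out as a shell sequence, the runtime protocol change just described could look roughly like the following; the conduit name eth0, the user port names and the protocol name ocelot-8021q are placeholders, and the name written must be one the switch driver actually supports:

.. code-block:: sh

    # the DSA conduit and all attached switch ports must be down for the change
    ip link set lan1 down
    ip link set lan2 down
    ip link set eth0 down
    # write a protocol name supported by the hardware/driver (placeholder here)
    echo ocelot-8021q > /sys/class/net/eth0/dsa/tagging
    ip link set eth0 up
    ip link set lan1 up
    ip link set lan2 up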
@@ -159,7 +163,7 @@ mockup driver, which can be attached to any network interface. The goal is that
 any network interface should be capable of transmitting the same packet in the
 same way, and the tagger should decode the same received packet in the same way
 regardless of the driver used for the switch control path, and the driver used
-for the DSA master.
+for the DSA conduit.
 The transmission of a packet goes through the tagger's ``xmit`` function.
 The passed ``struct sk_buff *skb`` has ``skb->data`` pointing at
@@ -183,44 +187,44 @@ virtual DSA user network interface corresponding to the physical front-facing
 switch port that the packet was received on.
 Since tagging protocols in category 1 and 2 break software (and most often also
-hardware) packet dissection on the DSA master, features such as RPS (Receive
-Packet Steering) on the DSA master would be broken. The DSA framework deals
+hardware) packet dissection on the DSA conduit, features such as RPS (Receive
+Packet Steering) on the DSA conduit would be broken. The DSA framework deals
 with this by hooking into the flow dissector and shifting the offset at which
-the IP header is to be found in the tagged frame as seen by the DSA master.
+the IP header is to be found in the tagged frame as seen by the DSA conduit.
 This behavior is automatic based on the ``overhead`` value of the tagging
 protocol. If not all packets are of equal size, the tagger can implement the
 ``flow_dissect`` method of the ``struct dsa_device_ops`` and override this
 default behavior by specifying the correct offset incurred by each individual
 RX packet. Tail taggers do not cause issues to the flow dissector.
-Checksum offload should work with category 1 and 2 taggers when the DSA master
+Checksum offload should work with category 1 and 2 taggers when the DSA conduit
 driver declares NETIF_F_HW_CSUM in vlan_features and looks at csum_start and
 csum_offset. For those cases, DSA will shift the checksum start and offset by
-the tag size. If the DSA master driver still uses the legacy NETIF_F_IP_CSUM
+the tag size. If the DSA conduit driver still uses the legacy NETIF_F_IP_CSUM
 or NETIF_F_IPV6_CSUM in vlan_features, the offload might only work if the
 offload hardware already expects that specific tag (perhaps due to matching
-vendors). DSA slaves inherit those flags from the master port, and it is up to
+vendors). DSA user ports inherit those flags from the conduit, and it is up to
 the driver to correctly fall back to software checksum when the IP header is not
 where the hardware expects. If that check is ineffective, the packets might go
 to the network without a proper checksum (the checksum field will have the
 pseudo IP header sum). For category 3, when the offload hardware does not
 already expect the switch tag in use, the checksum must be calculated before any
-tag is inserted (i.e. inside the tagger). Otherwise, the DSA master would
+tag is inserted (i.e. inside the tagger). Otherwise, the DSA conduit would
 include the tail tag in the (software or hardware) checksum calculation. Then,
 when the tag gets stripped by the switch during transmission, it will leave an
 incorrect IP checksum in place.
 Due to various reasons (most common being category 1 taggers being associated
-with DSA-unaware masters, mangling what the master perceives as MAC DA), the
-tagging protocol may require the DSA master to operate in promiscuous mode, to
+with DSA-unaware conduits, mangling what the conduit perceives as MAC DA), the
+tagging protocol may require the DSA conduit to operate in promiscuous mode, to
 receive all frames regardless of the value of the MAC DA. This can be done by
-setting the ``promisc_on_master`` property of the ``struct dsa_device_ops``.
-Note that this assumes a DSA-unaware master driver, which is the norm.
-Master network devices
-----------------------
-Master network devices are regular, unmodified Linux network device drivers for
+setting the ``promisc_on_conduit`` property of the ``struct dsa_device_ops``.
+Note that this assumes a DSA-unaware conduit driver, which is the norm.
+Conduit network devices
+-----------------------
+Conduit network devices are regular, unmodified Linux network device drivers for
 the CPU/management Ethernet interface. Such a driver might occasionally need to
 know whether DSA is enabled (e.g.: to enable/disable specific offload features),
 but the DSA subsystem has been proven to work with industry standard drivers:
@@ -232,14 +236,14 @@ Ethernet switch.
 Networking stack hooks
 ----------------------
-When a master netdev is used with DSA, a small hook is placed in the
+When a conduit netdev is used with DSA, a small hook is placed in the
 networking stack is in order to have the DSA subsystem process the Ethernet
 switch specific tagging protocol. DSA accomplishes this by registering a
 specific (and fake) Ethernet type (later becoming ``skb->protocol``) with the
 networking stack, this is also known as a ``ptype`` or ``packet_type``. A typical
 Ethernet Frame receive sequence looks like this:
-Master network device (e.g.: e1000e):
+Conduit network device (e.g.: e1000e):
 1. Receive interrupt fires:
@@ -269,16 +273,16 @@ Master network device (e.g.: e1000e):
 - inspect and strip switch tag protocol to determine originating port
 - locate per-port network device
-- invoke ``eth_type_trans()`` with the DSA slave network device
+- invoke ``eth_type_trans()`` with the DSA user network device
 - invoked ``netif_receive_skb()``
-Past this point, the DSA slave network devices get delivered regular Ethernet
+Past this point, the DSA user network devices get delivered regular Ethernet
 frames that can be processed by the networking stack.
-Slave network devices
----------------------
-Slave network devices created by DSA are stacked on top of their master network
+User network devices
+--------------------
+User network devices created by DSA are stacked on top of their conduit network
 device, each of these network interfaces will be responsible for being a
 controlling and data-flowing end-point for each front-panel port of the switch.
 These interfaces are specialized in order to:
@@ -289,31 +293,31 @@ These interfaces are specialized in order to:
 Wake-on-LAN, register dumps...
 - manage external/internal PHY: link, auto-negotiation, etc.
-These slave network devices have custom net_device_ops and ethtool_ops function
+These user network devices have custom net_device_ops and ethtool_ops function
 pointers which allow DSA to introduce a level of layering between the networking
 stack/ethtool and the switch driver implementation.
-Upon frame transmission from these slave network devices, DSA will look up which
+Upon frame transmission from these user network devices, DSA will look up which
 switch tagging protocol is currently registered with these network devices and
 invoke a specific transmit routine which takes care of adding the relevant
 switch tag in the Ethernet frames.
-These frames are then queued for transmission using the master network device
+These frames are then queued for transmission using the conduit network device
 ``ndo_start_xmit()`` function. Since they contain the appropriate switch tag, the
 Ethernet switch will be able to process these incoming frames from the
 management interface and deliver them to the physical switch port.
 When using multiple CPU ports, it is possible to stack a LAG (bonding/team)
-device between the DSA slave devices and the physical DSA masters. The LAG
-device is thus also a DSA master, but the LAG slave devices continue to be DSA
-masters as well (just with no user port assigned to them; this is needed for
-recovery in case the LAG DSA master disappears). Thus, the data path of the LAG
-DSA master is used asymmetrically. On RX, the ``ETH_P_XDSA`` handler, which
-calls ``dsa_switch_rcv()``, is invoked early (on the physical DSA master;
-LAG slave). Therefore, the RX data path of the LAG DSA master is not used.
-On the other hand, TX takes place linearly: ``dsa_slave_xmit`` calls
-``dsa_enqueue_skb``, which calls ``dev_queue_xmit`` towards the LAG DSA master.
-The latter calls ``dev_queue_xmit`` towards one physical DSA master or the
+device between the DSA user devices and the physical DSA conduits. The LAG
+device is thus also a DSA conduit, but the LAG slave devices continue to be DSA
+conduits as well (just with no user port assigned to them; this is needed for
+recovery in case the LAG DSA conduit disappears). Thus, the data path of the LAG
+DSA conduit is used asymmetrically. On RX, the ``ETH_P_XDSA`` handler, which
+calls ``dsa_switch_rcv()``, is invoked early (on the physical DSA conduit;
+LAG slave). Therefore, the RX data path of the LAG DSA conduit is not used.
+On the other hand, TX takes place linearly: ``dsa_user_xmit`` calls
+``dsa_enqueue_skb``, which calls ``dev_queue_xmit`` towards the LAG DSA conduit.
+The latter calls ``dev_queue_xmit`` towards one physical DSA conduit or the
 other, and in both cases, the packet exits the system through a hardware path
 towards the switch.
@@ -352,11 +356,11 @@ perspective::
 || swp0 | | swp1 | | swp2 | | swp3 ||
 ++------+-+------+-+------+-+------++
-Slave MDIO bus
---------------
-In order to be able to read to/from a switch PHY built into it, DSA creates a
-slave MDIO bus which allows a specific switch driver to divert and intercept
+User MDIO bus
+-------------
+In order to be able to read to/from a switch PHY built into it, DSA creates an
+user MDIO bus which allows a specific switch driver to divert and intercept
 MDIO reads/writes towards specific PHY addresses. In most MDIO-connected
 switches, these functions would utilize direct or indirect PHY addressing mode
 to return standard MII registers from the switch builtin PHYs, allowing the PHY
@@ -364,7 +368,7 @@ library and/or to return link status, link partner pages, auto-negotiation
 results, etc.
 For Ethernet switches which have both external and internal MDIO buses, the
-slave MII bus can be utilized to mux/demux MDIO reads and writes towards either
+user MII bus can be utilized to mux/demux MDIO reads and writes towards either
 internal or external MDIO devices this switch might be connected to: internal
 PHYs, external PHYs, or even external switches.
@@ -381,10 +385,10 @@ DSA data structures are defined in ``include/net/dsa.h`` as well as
 - ``dsa_platform_data``: platform device configuration data which can reference
 a collection of dsa_chip_data structures if multiple switches are cascaded,
-the master network device this switch tree is attached to needs to be
+the conduit network device this switch tree is attached to needs to be
 referenced
-- ``dsa_switch_tree``: structure assigned to the master network device under
+- ``dsa_switch_tree``: structure assigned to the conduit network device under
 ``dsa_ptr``, this structure references a dsa_platform_data structure as well as
 the tagging protocol supported by the switch tree, and which receive/transmit
 function hooks should be invoked, information about the directly attached
@@ -392,7 +396,7 @@ DSA data structures are defined in ``include/net/dsa.h`` as well as
 referenced to address individual switches in the tree.
 - ``dsa_switch``: structure describing a switch device in the tree, referencing
-a ``dsa_switch_tree`` as a backpointer, slave network devices, master network
+a ``dsa_switch_tree`` as a backpointer, user network devices, conduit network
 device, and a reference to the backing``dsa_switch_ops``
 - ``dsa_switch_ops``: structure referencing function pointers, see below for a
@@ -404,7 +408,7 @@ Design limitations
 Lack of CPU/DSA network devices
 -------------------------------
-DSA does not currently create slave network devices for the CPU or DSA ports, as
+DSA does not currently create user network devices for the CPU or DSA ports, as
 described before. This might be an issue in the following cases:
 - inability to fetch switch CPU port statistics counters using ethtool, which
@@ -419,7 +423,7 @@ described before. This might be an issue in the following cases:
 Common pitfalls using DSA setups
 --------------------------------
-Once a master network device is configured to use DSA (dev->dsa_ptr becomes
+Once a conduit network device is configured to use DSA (dev->dsa_ptr becomes
 non-NULL), and the switch behind it expects a tagging protocol, this network
 interface can only exclusively be used as a conduit interface. Sending packets
 directly through this interface (e.g.: opening a socket using this interface)
@@ -440,7 +444,7 @@ DSA currently leverages the following subsystems:
 MDIO/PHY library
 ----------------
-Slave network devices exposed by DSA may or may not be interfacing with PHY
+User network devices exposed by DSA may or may not be interfacing with PHY
 devices (``struct phy_device`` as defined in ``include/linux/phy.h)``, but the DSA
 subsystem deals with all possible combinations:
@@ -450,7 +454,7 @@ subsystem deals with all possible combinations:
 - special, non-autonegotiated or non MDIO-managed PHY devices: SFPs, MoCA; a.k.a
 fixed PHYs
-The PHY configuration is done by the ``dsa_slave_phy_setup()`` function and the
+The PHY configuration is done by the ``dsa_user_phy_setup()`` function and the
 logic basically looks like this:
 - if Device Tree is used, the PHY device is looked up using the standard
@@ -463,7 +467,7 @@ logic basically looks like this:
 and connected transparently using the special fixed MDIO bus driver
 - finally, if the PHY is built into the switch, as is very common with
-standalone switch packages, the PHY is probed using the slave MII bus created
+standalone switch packages, the PHY is probed using the user MII bus created
 by DSA
@@ -472,7 +476,7 @@ SWITCHDEV
 DSA directly utilizes SWITCHDEV when interfacing with the bridge layer, and
 more specifically with its VLAN filtering portion when configuring VLANs on top
-of per-port slave network devices. As of today, the only SWITCHDEV objects
+of per-port user network devices. As of today, the only SWITCHDEV objects
 supported by DSA are the FDB and VLAN objects.
 Devlink
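As an illustration of those FDB and VLAN objects reaching the switch, ordinary bridge commands on a DSA user port are sufficient; swp0, the VLAN ID and the MAC address below are placeholders:

.. code-block:: sh

    ip link add br0 type bridge vlan_filtering 1
    ip link set swp0 master br0
    # VLAN object offloaded to the switch port via SWITCHDEV
    bridge vlan add dev swp0 vid 100
    # static FDB entry offloaded to the switch port via SWITCHDEV
    bridge fdb add 00:11:22:33:44:55 dev swp0 master static vlan 100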
@@ -589,8 +593,8 @@ is torn down when the first switch unregisters.
 It is mandatory for DSA switch drivers to implement the ``shutdown()`` callback
 of their respective bus, and call ``dsa_switch_shutdown()`` from it (a minimal
 version of the full teardown performed by ``dsa_unregister_switch()``).
-The reason is that DSA keeps a reference on the master net device, and if the
-driver for the master device decides to unbind on shutdown, DSA's reference
+The reason is that DSA keeps a reference on the conduit net device, and if the
+driver for the conduit device decides to unbind on shutdown, DSA's reference
 will block that operation from finalizing.
 Either ``dsa_switch_shutdown()`` or ``dsa_unregister_switch()`` must be called,
@@ -615,7 +619,7 @@ Switch configuration
 tag formats.
 - ``change_tag_protocol``: when the default tagging protocol has compatibility
-problems with the master or other issues, the driver may support changing it
+problems with the conduit or other issues, the driver may support changing it
 at runtime, either through a device tree property or through sysfs. In that
 case, further calls to ``get_tag_protocol`` should report the protocol in
 current use.
@@ -643,22 +647,22 @@ Switch configuration
 PHY cannot be found. In this case, probing of the DSA switch continues
 without that particular port.
-- ``port_change_master``: method through which the affinity (association used
+- ``port_change_conduit``: method through which the affinity (association used
 for traffic termination purposes) between a user port and a CPU port can be
 changed. By default all user ports from a tree are assigned to the first
 available CPU port that makes sense for them (most of the times this means
 the user ports of a tree are all assigned to the same CPU port, except for H
 topologies as described in commit 2c0b03258b8b). The ``port`` argument
-represents the index of the user port, and the ``master`` argument represents
-the new DSA master ``net_device``. The CPU port associated with the new
-master can be retrieved by looking at ``struct dsa_port *cpu_dp =
-master->dsa_ptr``. Additionally, the master can also be a LAG device where
-all the slave devices are physical DSA masters. LAG DSA masters also have a
-valid ``master->dsa_ptr`` pointer, however this is not unique, but rather a
-duplicate of the first physical DSA master's (LAG slave) ``dsa_ptr``. In case
-of a LAG DSA master, a further call to ``port_lag_join`` will be emitted
+represents the index of the user port, and the ``conduit`` argument represents
+the new DSA conduit ``net_device``. The CPU port associated with the new
+conduit can be retrieved by looking at ``struct dsa_port *cpu_dp =
+conduit->dsa_ptr``. Additionally, the conduit can also be a LAG device where
+all the slave devices are physical DSA conduits. LAG DSA conduits also have a
+valid ``conduit->dsa_ptr`` pointer, however this is not unique, but rather a
+duplicate of the first physical DSA conduit's (LAG slave) ``dsa_ptr``. In case
+of a LAG DSA conduit, a further call to ``port_lag_join`` will be emitted
 separately for the physical CPU ports associated with the physical DSA
-masters, requesting them to create a hardware LAG associated with the LAG
+conduits, requesting them to create a hardware LAG associated with the LAG
 interface.
 PHY devices and link management
@@ -670,16 +674,16 @@ PHY devices and link management
 should return a 32-bit bitmask of "flags" that is private between the switch
 driver and the Ethernet PHY driver in ``drivers/net/phy/\*``.
-- ``phy_read``: Function invoked by the DSA slave MDIO bus when attempting to read
+- ``phy_read``: Function invoked by the DSA user MDIO bus when attempting to read
 the switch port MDIO registers. If unavailable, return 0xffff for each read.
 For builtin switch Ethernet PHYs, this function should allow reading the link
 status, auto-negotiation results, link partner pages, etc.
-- ``phy_write``: Function invoked by the DSA slave MDIO bus when attempting to write
+- ``phy_write``: Function invoked by the DSA user MDIO bus when attempting to write
 to the switch port MDIO registers. If unavailable return a negative error
 code.
-- ``adjust_link``: Function invoked by the PHY library when a slave network device
+- ``adjust_link``: Function invoked by the PHY library when a user network device
 is attached to a PHY device. This function is responsible for appropriately
 configuring the switch port link parameters: speed, duplex, pause based on
 what the ``phy_device`` is providing.
@@ -698,14 +702,14 @@ Ethtool operations
 typically return statistics strings, private flags strings, etc.
 - ``get_ethtool_stats``: ethtool function used to query per-port statistics and
-return their values. DSA overlays slave network devices general statistics:
+return their values. DSA overlays user network devices general statistics:
 RX/TX counters from the network device, with switch driver specific statistics
 per port
 - ``get_sset_count``: ethtool function used to query the number of statistics items
 - ``get_wol``: ethtool function used to obtain Wake-on-LAN settings per-port, this
-function may for certain implementations also query the master network device
+function may for certain implementations also query the conduit network device
 Wake-on-LAN settings if this interface needs to participate in Wake-on-LAN
 - ``set_wol``: ethtool function used to configure Wake-on-LAN settings per-port,
@@ -747,13 +751,13 @@ Power management
 should resume all Ethernet switch activities and re-configure the switch to be
 in a fully active state
-- ``port_enable``: function invoked by the DSA slave network device ndo_open
+- ``port_enable``: function invoked by the DSA user network device ndo_open
 function when a port is administratively brought up, this function should
 fully enable a given switch port. DSA takes care of marking the port with
 ``BR_STATE_BLOCKING`` if the port is a bridge member, or ``BR_STATE_FORWARDING`` if it
 was not, and propagating these changes down to the hardware
-- ``port_disable``: function invoked by the DSA slave network device ndo_close
+- ``port_disable``: function invoked by the DSA user network device ndo_close
 function when a port is administratively brought down, this function should
 fully disable a given switch port. DSA takes care of marking the port with
 ``BR_STATE_DISABLED`` and propagating changes to the hardware if this port is


@@ -4,7 +4,7 @@ LAN9303 Ethernet switch driver
 The LAN9303 is a three port 10/100 Mbps ethernet switch with integrated phys for
 the two external ethernet ports. The third port is an RMII/MII interface to a
-host master network interface (e.g. fixed link).
+host conduit network interface (e.g. fixed link).
 Driver details


@@ -79,7 +79,7 @@ The hardware tags all traffic internally with a port-based VLAN (pvid), or it
 decodes the VLAN information from the 802.1Q tag. Advanced VLAN classification
 is not possible. Once attributed a VLAN tag, frames are checked against the
 port's membership rules and dropped at ingress if they don't match any VLAN.
-This behavior is available when switch ports are enslaved to a bridge with
+This behavior is available when switch ports join a bridge with
 ``vlan_filtering 1``.
 Normally the hardware is not configurable with respect to VLAN awareness, but
@ -122,7 +122,7 @@ on egress. Using ``vlan_filtering=1``, the behavior is the other way around:
offloaded flows can be steered to TX queues based on the VLAN PCP, but the DSA offloaded flows can be steered to TX queues based on the VLAN PCP, but the DSA
net devices are no longer able to do that. To inject frames into a hardware TX net devices are no longer able to do that. To inject frames into a hardware TX
queue with VLAN awareness active, it is necessary to create a VLAN queue with VLAN awareness active, it is necessary to create a VLAN
sub-interface on the DSA master port, and send normal (0x8100) VLAN-tagged frames sub-interface on the DSA conduit port, and send normal (0x8100) VLAN-tagged frames
towards the switch, with the VLAN PCP bits set appropriately. towards the switch, with the VLAN PCP bits set appropriately.
Management traffic (having DMAC 01-80-C2-xx-xx-xx or 01-19-1B-xx-xx-xx) is the Management traffic (having DMAC 01-80-C2-xx-xx-xx or 01-19-1B-xx-xx-xx) is the
@ -389,7 +389,7 @@ MDIO bus and PHY management
The SJA1105 does not have an MDIO bus and does not perform in-band AN either. The SJA1105 does not have an MDIO bus and does not perform in-band AN either.
Therefore there is no link state notification coming from the switch device. Therefore there is no link state notification coming from the switch device.
A board would need to hook up the PHYs connected to the switch to any other A board would need to hook up the PHYs connected to the switch to any other
MDIO bus available to Linux within the system (e.g. to the DSA master's MDIO MDIO bus available to Linux within the system (e.g. to the DSA conduit's MDIO
bus). Link state management then works by the driver manually keeping in sync bus). Link state management then works by the driver manually keeping in sync
(over SPI commands) the MAC link speed with the settings negotiated by the PHY. (over SPI commands) the MAC link speed with the settings negotiated by the PHY.
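As a sketch of that last point (not from the patch): a driver in this situation typically mirrors the PHY-negotiated settings into the switch MAC from its phylink_mac_link_up() operation once the externally attached PHY reports link. The signature below follows the DSA phylink_mac_link_up op; struct example_priv and example_spi_set_port_speed() are invented stand-ins for the driver's private state and its SPI register accessor.

#include <linux/phy.h>
#include <net/dsa.h>

/* Invented private state, standing in for the real driver's */
struct example_priv {
	void *spi_regmap;	/* e.g. a regmap backed by SPI */
};

/* Hypothetical helper issuing the SPI write that programs the port MAC */
static void example_spi_set_port_speed(struct example_priv *priv, int port,
				       int speed, int duplex)
{
	/* device-specific SPI/regmap write would go here */
}

static void example_phylink_mac_link_up(struct dsa_switch *ds, int port,
					unsigned int mode,
					phy_interface_t interface,
					struct phy_device *phydev,
					int speed, int duplex,
					bool tx_pause, bool rx_pause)
{
	struct example_priv *priv = ds->priv;

	/* keep the MAC in sync with what the PHY negotiated */
	example_spi_set_port_speed(priv, port, speed, duplex);
}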

View File

@ -13,7 +13,7 @@
/ { / {
aliases { aliases {
ethernet0 = &eth0; ethernet0 = &eth0;
/* for dsa slave device */ /* for DSA user port device */
ethernet1 = &switch0port1; ethernet1 = &switch0port1;
ethernet2 = &switch0port2; ethernet2 = &switch0port2;
ethernet3 = &switch0port3; ethernet3 = &switch0port3;

View File

@ -757,7 +757,7 @@ int b53_configure_vlan(struct dsa_switch *ds)
/* Create an untagged VLAN entry for the default PVID in case /* Create an untagged VLAN entry for the default PVID in case
* CONFIG_VLAN_8021Q is disabled and there are no calls to * CONFIG_VLAN_8021Q is disabled and there are no calls to
* dsa_slave_vlan_rx_add_vid() to create the default VLAN * dsa_user_vlan_rx_add_vid() to create the default VLAN
* entry. Do this only when the tagging protocol is not * entry. Do this only when the tagging protocol is not
* DSA_TAG_PROTO_NONE * DSA_TAG_PROTO_NONE
*/ */
@ -958,7 +958,7 @@ static struct phy_device *b53_get_phy_device(struct dsa_switch *ds, int port)
return NULL; return NULL;
} }
return mdiobus_get_phy(ds->slave_mii_bus, port); return mdiobus_get_phy(ds->user_mii_bus, port);
} }
void b53_get_strings(struct dsa_switch *ds, int port, u32 stringset, void b53_get_strings(struct dsa_switch *ds, int port, u32 stringset,

View File

@ -329,7 +329,7 @@ static int b53_mdio_probe(struct mdio_device *mdiodev)
* layer setup * layer setup
*/ */
if (of_machine_is_compatible("brcm,bcm7445d0") && if (of_machine_is_compatible("brcm,bcm7445d0") &&
strcmp(mdiodev->bus->name, "sf2 slave mii")) strcmp(mdiodev->bus->name, "sf2 user mii"))
return -EPROBE_DEFER; return -EPROBE_DEFER;
dev = b53_switch_alloc(&mdiodev->dev, &b53_mdio_ops, mdiodev->bus); dev = b53_switch_alloc(&mdiodev->dev, &b53_mdio_ops, mdiodev->bus);

View File

@ -623,19 +623,19 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
priv->master_mii_dn = dn; priv->master_mii_dn = dn;
priv->slave_mii_bus = mdiobus_alloc(); priv->user_mii_bus = mdiobus_alloc();
if (!priv->slave_mii_bus) { if (!priv->user_mii_bus) {
err = -ENOMEM; err = -ENOMEM;
goto err_put_master_mii_bus_dev; goto err_put_master_mii_bus_dev;
} }
priv->slave_mii_bus->priv = priv; priv->user_mii_bus->priv = priv;
priv->slave_mii_bus->name = "sf2 slave mii"; priv->user_mii_bus->name = "sf2 user mii";
priv->slave_mii_bus->read = bcm_sf2_sw_mdio_read; priv->user_mii_bus->read = bcm_sf2_sw_mdio_read;
priv->slave_mii_bus->write = bcm_sf2_sw_mdio_write; priv->user_mii_bus->write = bcm_sf2_sw_mdio_write;
snprintf(priv->slave_mii_bus->id, MII_BUS_ID_SIZE, "sf2-%d", snprintf(priv->user_mii_bus->id, MII_BUS_ID_SIZE, "sf2-%d",
index++); index++);
priv->slave_mii_bus->dev.of_node = dn; priv->user_mii_bus->dev.of_node = dn;
/* Include the pseudo-PHY address to divert reads towards our /* Include the pseudo-PHY address to divert reads towards our
* workaround. This is only required for 7445D0, since 7445E0 * workaround. This is only required for 7445D0, since 7445E0
@ -653,9 +653,9 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
priv->indir_phy_mask = 0; priv->indir_phy_mask = 0;
ds->phys_mii_mask = priv->indir_phy_mask; ds->phys_mii_mask = priv->indir_phy_mask;
ds->slave_mii_bus = priv->slave_mii_bus; ds->user_mii_bus = priv->user_mii_bus;
priv->slave_mii_bus->parent = ds->dev->parent; priv->user_mii_bus->parent = ds->dev->parent;
priv->slave_mii_bus->phy_mask = ~priv->indir_phy_mask; priv->user_mii_bus->phy_mask = ~priv->indir_phy_mask;
/* We need to make sure that of_phy_connect() will not work by /* We need to make sure that of_phy_connect() will not work by
* removing the 'phandle' and 'linux,phandle' properties and * removing the 'phandle' and 'linux,phandle' properties and
@ -682,14 +682,14 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
phy_device_remove(phydev); phy_device_remove(phydev);
} }
err = mdiobus_register(priv->slave_mii_bus); err = mdiobus_register(priv->user_mii_bus);
if (err && dn) if (err && dn)
goto err_free_slave_mii_bus; goto err_free_user_mii_bus;
return 0; return 0;
err_free_slave_mii_bus: err_free_user_mii_bus:
mdiobus_free(priv->slave_mii_bus); mdiobus_free(priv->user_mii_bus);
err_put_master_mii_bus_dev: err_put_master_mii_bus_dev:
put_device(&priv->master_mii_bus->dev); put_device(&priv->master_mii_bus->dev);
err_of_node_put: err_of_node_put:
@ -699,10 +699,9 @@ err_of_node_put:
static void bcm_sf2_mdio_unregister(struct bcm_sf2_priv *priv) static void bcm_sf2_mdio_unregister(struct bcm_sf2_priv *priv)
{ {
mdiobus_unregister(priv->slave_mii_bus); mdiobus_unregister(priv->user_mii_bus);
mdiobus_free(priv->slave_mii_bus); mdiobus_free(priv->user_mii_bus);
put_device(&priv->master_mii_bus->dev); put_device(&priv->master_mii_bus->dev);
of_node_put(priv->master_mii_dn);
} }
static u32 bcm_sf2_sw_get_phy_flags(struct dsa_switch *ds, int port) static u32 bcm_sf2_sw_get_phy_flags(struct dsa_switch *ds, int port)
@ -915,7 +914,7 @@ static void bcm_sf2_sw_fixed_state(struct dsa_switch *ds, int port,
* state machine and make it go in PHY_FORCING state instead. * state machine and make it go in PHY_FORCING state instead.
*/ */
if (!status->link) if (!status->link)
netif_carrier_off(dsa_to_port(ds, port)->slave); netif_carrier_off(dsa_to_port(ds, port)->user);
status->duplex = DUPLEX_FULL; status->duplex = DUPLEX_FULL;
} else { } else {
status->link = true; status->link = true;
@ -989,7 +988,7 @@ static int bcm_sf2_sw_resume(struct dsa_switch *ds)
static void bcm_sf2_sw_get_wol(struct dsa_switch *ds, int port, static void bcm_sf2_sw_get_wol(struct dsa_switch *ds, int port,
struct ethtool_wolinfo *wol) struct ethtool_wolinfo *wol)
{ {
struct net_device *p = dsa_port_to_master(dsa_to_port(ds, port)); struct net_device *p = dsa_port_to_conduit(dsa_to_port(ds, port));
struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds); struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
struct ethtool_wolinfo pwol = { }; struct ethtool_wolinfo pwol = { };
@ -1013,7 +1012,7 @@ static void bcm_sf2_sw_get_wol(struct dsa_switch *ds, int port,
static int bcm_sf2_sw_set_wol(struct dsa_switch *ds, int port, static int bcm_sf2_sw_set_wol(struct dsa_switch *ds, int port,
struct ethtool_wolinfo *wol) struct ethtool_wolinfo *wol)
{ {
struct net_device *p = dsa_port_to_master(dsa_to_port(ds, port)); struct net_device *p = dsa_port_to_conduit(dsa_to_port(ds, port));
struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds); struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
s8 cpu_port = dsa_to_port(ds, port)->cpu_dp->index; s8 cpu_port = dsa_to_port(ds, port)->cpu_dp->index;
struct ethtool_wolinfo pwol = { }; struct ethtool_wolinfo pwol = { };

View File

@ -108,7 +108,7 @@ struct bcm_sf2_priv {
/* Master and slave MDIO bus controller */ /* Master and slave MDIO bus controller */
unsigned int indir_phy_mask; unsigned int indir_phy_mask;
struct device_node *master_mii_dn; struct device_node *master_mii_dn;
struct mii_bus *slave_mii_bus; struct mii_bus *user_mii_bus;
struct mii_bus *master_mii_bus; struct mii_bus *master_mii_bus;
/* Bitmask of ports needing BRCM tags */ /* Bitmask of ports needing BRCM tags */

View File

@ -1102,7 +1102,7 @@ static int bcm_sf2_cfp_rule_get_all(struct bcm_sf2_priv *priv,
int bcm_sf2_get_rxnfc(struct dsa_switch *ds, int port, int bcm_sf2_get_rxnfc(struct dsa_switch *ds, int port,
struct ethtool_rxnfc *nfc, u32 *rule_locs) struct ethtool_rxnfc *nfc, u32 *rule_locs)
{ {
struct net_device *p = dsa_port_to_master(dsa_to_port(ds, port)); struct net_device *p = dsa_port_to_conduit(dsa_to_port(ds, port));
struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds); struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
int ret = 0; int ret = 0;
@ -1145,7 +1145,7 @@ int bcm_sf2_get_rxnfc(struct dsa_switch *ds, int port,
int bcm_sf2_set_rxnfc(struct dsa_switch *ds, int port, int bcm_sf2_set_rxnfc(struct dsa_switch *ds, int port,
struct ethtool_rxnfc *nfc) struct ethtool_rxnfc *nfc)
{ {
struct net_device *p = dsa_port_to_master(dsa_to_port(ds, port)); struct net_device *p = dsa_port_to_conduit(dsa_to_port(ds, port));
struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds); struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
int ret = 0; int ret = 0;

View File

@ -1084,7 +1084,7 @@ static int lan9303_port_enable(struct dsa_switch *ds, int port,
if (!dsa_port_is_user(dp)) if (!dsa_port_is_user(dp))
return 0; return 0;
vlan_vid_add(dsa_port_to_master(dp), htons(ETH_P_8021Q), port); vlan_vid_add(dsa_port_to_conduit(dp), htons(ETH_P_8021Q), port);
return lan9303_enable_processing_port(chip, port); return lan9303_enable_processing_port(chip, port);
} }
@ -1097,7 +1097,7 @@ static void lan9303_port_disable(struct dsa_switch *ds, int port)
if (!dsa_port_is_user(dp)) if (!dsa_port_is_user(dp))
return; return;
vlan_vid_del(dsa_port_to_master(dp), htons(ETH_P_8021Q), port); vlan_vid_del(dsa_port_to_conduit(dp), htons(ETH_P_8021Q), port);
lan9303_disable_processing_port(chip, port); lan9303_disable_processing_port(chip, port);
lan9303_phy_write(ds, chip->phy_addr_base + port, MII_BMCR, BMCR_PDOWN); lan9303_phy_write(ds, chip->phy_addr_base + port, MII_BMCR, BMCR_PDOWN);

View File

@ -510,22 +510,22 @@ static int gswip_mdio(struct gswip_priv *priv, struct device_node *mdio_np)
struct dsa_switch *ds = priv->ds; struct dsa_switch *ds = priv->ds;
int err; int err;
ds->slave_mii_bus = mdiobus_alloc(); ds->user_mii_bus = mdiobus_alloc();
if (!ds->slave_mii_bus) if (!ds->user_mii_bus)
return -ENOMEM; return -ENOMEM;
ds->slave_mii_bus->priv = priv; ds->user_mii_bus->priv = priv;
ds->slave_mii_bus->read = gswip_mdio_rd; ds->user_mii_bus->read = gswip_mdio_rd;
ds->slave_mii_bus->write = gswip_mdio_wr; ds->user_mii_bus->write = gswip_mdio_wr;
ds->slave_mii_bus->name = "lantiq,xrx200-mdio"; ds->user_mii_bus->name = "lantiq,xrx200-mdio";
snprintf(ds->slave_mii_bus->id, MII_BUS_ID_SIZE, "%s-mii", snprintf(ds->user_mii_bus->id, MII_BUS_ID_SIZE, "%s-mii",
dev_name(priv->dev)); dev_name(priv->dev));
ds->slave_mii_bus->parent = priv->dev; ds->user_mii_bus->parent = priv->dev;
ds->slave_mii_bus->phy_mask = ~ds->phys_mii_mask; ds->user_mii_bus->phy_mask = ~ds->phys_mii_mask;
err = of_mdiobus_register(ds->slave_mii_bus, mdio_np); err = of_mdiobus_register(ds->user_mii_bus, mdio_np);
if (err) if (err)
mdiobus_free(ds->slave_mii_bus); mdiobus_free(ds->user_mii_bus);
return err; return err;
} }
@ -2196,8 +2196,8 @@ disable_switch:
dsa_unregister_switch(priv->ds); dsa_unregister_switch(priv->ds);
mdio_bus: mdio_bus:
if (mdio_np) { if (mdio_np) {
mdiobus_unregister(priv->ds->slave_mii_bus); mdiobus_unregister(priv->ds->user_mii_bus);
mdiobus_free(priv->ds->slave_mii_bus); mdiobus_free(priv->ds->user_mii_bus);
} }
put_mdio_node: put_mdio_node:
of_node_put(mdio_np); of_node_put(mdio_np);
@ -2219,10 +2219,10 @@ static void gswip_remove(struct platform_device *pdev)
dsa_unregister_switch(priv->ds); dsa_unregister_switch(priv->ds);
if (priv->ds->slave_mii_bus) { if (priv->ds->user_mii_bus) {
mdiobus_unregister(priv->ds->slave_mii_bus); mdiobus_unregister(priv->ds->user_mii_bus);
of_node_put(priv->ds->slave_mii_bus->dev.of_node); of_node_put(priv->ds->user_mii_bus->dev.of_node);
mdiobus_free(priv->ds->slave_mii_bus); mdiobus_free(priv->ds->user_mii_bus);
} }
for (i = 0; i < priv->num_gphy_fw; i++) for (i = 0; i < priv->num_gphy_fw; i++)

View File

@ -1170,7 +1170,7 @@ int ksz9477_tc_cbs_set_cinc(struct ksz_device *dev, int port, u32 val)
void ksz9477_hsr_join(struct dsa_switch *ds, int port, struct net_device *hsr) void ksz9477_hsr_join(struct dsa_switch *ds, int port, struct net_device *hsr)
{ {
struct ksz_device *dev = ds->priv; struct ksz_device *dev = ds->priv;
struct net_device *slave; struct net_device *user;
struct dsa_port *hsr_dp; struct dsa_port *hsr_dp;
u8 data, hsr_ports = 0; u8 data, hsr_ports = 0;
@ -1202,8 +1202,8 @@ void ksz9477_hsr_join(struct dsa_switch *ds, int port, struct net_device *hsr)
ksz_port_cfg(dev, port, REG_PORT_LUE_CTRL, PORT_SRC_ADDR_FILTER, true); ksz_port_cfg(dev, port, REG_PORT_LUE_CTRL, PORT_SRC_ADDR_FILTER, true);
/* Setup HW supported features for lan HSR ports */ /* Setup HW supported features for lan HSR ports */
slave = dsa_to_port(ds, port)->slave; user = dsa_to_port(ds, port)->user;
slave->features |= KSZ9477_SUPPORTED_HSR_FEATURES; user->features |= KSZ9477_SUPPORTED_HSR_FEATURES;
} }
void ksz9477_hsr_leave(struct dsa_switch *ds, int port, struct net_device *hsr) void ksz9477_hsr_leave(struct dsa_switch *ds, int port, struct net_device *hsr)

View File

@ -1945,14 +1945,14 @@ static int ksz_irq_phy_setup(struct ksz_device *dev)
ret = irq; ret = irq;
goto out; goto out;
} }
ds->slave_mii_bus->irq[phy] = irq; ds->user_mii_bus->irq[phy] = irq;
} }
} }
return 0; return 0;
out: out:
while (phy--) while (phy--)
if (BIT(phy) & ds->phys_mii_mask) if (BIT(phy) & ds->phys_mii_mask)
irq_dispose_mapping(ds->slave_mii_bus->irq[phy]); irq_dispose_mapping(ds->user_mii_bus->irq[phy]);
return ret; return ret;
} }
@ -1964,7 +1964,7 @@ static void ksz_irq_phy_free(struct ksz_device *dev)
for (phy = 0; phy < KSZ_MAX_NUM_PORTS; phy++) for (phy = 0; phy < KSZ_MAX_NUM_PORTS; phy++)
if (BIT(phy) & ds->phys_mii_mask) if (BIT(phy) & ds->phys_mii_mask)
irq_dispose_mapping(ds->slave_mii_bus->irq[phy]); irq_dispose_mapping(ds->user_mii_bus->irq[phy]);
} }
static int ksz_mdio_register(struct ksz_device *dev) static int ksz_mdio_register(struct ksz_device *dev)
@ -1987,12 +1987,12 @@ static int ksz_mdio_register(struct ksz_device *dev)
bus->priv = dev; bus->priv = dev;
bus->read = ksz_sw_mdio_read; bus->read = ksz_sw_mdio_read;
bus->write = ksz_sw_mdio_write; bus->write = ksz_sw_mdio_write;
bus->name = "ksz slave smi"; bus->name = "ksz user smi";
snprintf(bus->id, MII_BUS_ID_SIZE, "SMI-%d", ds->index); snprintf(bus->id, MII_BUS_ID_SIZE, "SMI-%d", ds->index);
bus->parent = ds->dev; bus->parent = ds->dev;
bus->phy_mask = ~ds->phys_mii_mask; bus->phy_mask = ~ds->phys_mii_mask;
ds->slave_mii_bus = bus; ds->user_mii_bus = bus;
if (dev->irq > 0) { if (dev->irq > 0) {
ret = ksz_irq_phy_setup(dev); ret = ksz_irq_phy_setup(dev);
@ -2344,7 +2344,7 @@ static void ksz_mib_read_work(struct work_struct *work)
if (!p->read) { if (!p->read) {
const struct dsa_port *dp = dsa_to_port(dev->ds, i); const struct dsa_port *dp = dsa_to_port(dev->ds, i);
if (!netif_carrier_ok(dp->slave)) if (!netif_carrier_ok(dp->user))
mib->cnt_ptr = dev->info->reg_mib_cnt; mib->cnt_ptr = dev->info->reg_mib_cnt;
} }
port_r_cnt(dev, i); port_r_cnt(dev, i);
@ -2464,7 +2464,7 @@ static void ksz_get_ethtool_stats(struct dsa_switch *ds, int port,
mutex_lock(&mib->cnt_mutex); mutex_lock(&mib->cnt_mutex);
/* Only read dropped counters if no link. */ /* Only read dropped counters if no link. */
if (!netif_carrier_ok(dp->slave)) if (!netif_carrier_ok(dp->user))
mib->cnt_ptr = dev->info->reg_mib_cnt; mib->cnt_ptr = dev->info->reg_mib_cnt;
port_r_cnt(dev, port); port_r_cnt(dev, port);
memcpy(buf, mib->counters, dev->info->mib_cnt * sizeof(u64)); memcpy(buf, mib->counters, dev->info->mib_cnt * sizeof(u64));
@ -2574,7 +2574,7 @@ static int ksz_port_setup(struct dsa_switch *ds, int port)
if (!dsa_is_user_port(ds, port)) if (!dsa_is_user_port(ds, port))
return 0; return 0;
/* setup slave port */ /* setup user port */
dev->dev_ops->port_setup(dev, port, false); dev->dev_ops->port_setup(dev, port, false);
/* port_stp_state_set() will be called after to enable the port so /* port_stp_state_set() will be called after to enable the port so
@ -3567,8 +3567,8 @@ static int ksz_port_set_mac_address(struct dsa_switch *ds, int port,
static int ksz_switch_macaddr_get(struct dsa_switch *ds, int port, static int ksz_switch_macaddr_get(struct dsa_switch *ds, int port,
struct netlink_ext_ack *extack) struct netlink_ext_ack *extack)
{ {
struct net_device *slave = dsa_to_port(ds, port)->slave; struct net_device *user = dsa_to_port(ds, port)->user;
const unsigned char *addr = slave->dev_addr; const unsigned char *addr = user->dev_addr;
struct ksz_switch_macaddr *switch_macaddr; struct ksz_switch_macaddr *switch_macaddr;
struct ksz_device *dev = ds->priv; struct ksz_device *dev = ds->priv;
const u16 *regs = dev->info->regs; const u16 *regs = dev->info->regs;

View File

@ -557,7 +557,7 @@ static void ksz_ptp_txtstamp_skb(struct ksz_device *dev,
struct skb_shared_hwtstamps hwtstamps = {}; struct skb_shared_hwtstamps hwtstamps = {};
int ret; int ret;
/* timeout must include time for the DSA master to transmit data, tstamp latency, /* timeout must include time for the DSA conduit to transmit data, tstamp latency,
* IRQ latency and time for reading the time stamp. * IRQ latency and time for reading the time stamp.
*/ */
ret = wait_for_completion_timeout(&prt->tstamp_msg_comp, ret = wait_for_completion_timeout(&prt->tstamp_msg_comp,

View File

@ -1113,7 +1113,7 @@ mt7530_port_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
u32 val; u32 val;
/* When a new MTU is set, DSA always sets the CPU port's MTU to the /* When a new MTU is set, DSA always sets the CPU port's MTU to the
* largest MTU of the slave ports. Because the switch only has a global * largest MTU of the user ports. Because the switch only has a global
* RX length register, only allowing CPU port here is enough. * RX length register, only allowing CPU port here is enough.
*/ */
if (!dsa_is_cpu_port(ds, port)) if (!dsa_is_cpu_port(ds, port))
@ -2069,7 +2069,7 @@ mt7530_setup_mdio_irq(struct mt7530_priv *priv)
unsigned int irq; unsigned int irq;
irq = irq_create_mapping(priv->irq_domain, p); irq = irq_create_mapping(priv->irq_domain, p);
ds->slave_mii_bus->irq[p] = irq; ds->user_mii_bus->irq[p] = irq;
} }
} }
} }
@ -2163,7 +2163,7 @@ mt7530_setup_mdio(struct mt7530_priv *priv)
if (!bus) if (!bus)
return -ENOMEM; return -ENOMEM;
ds->slave_mii_bus = bus; ds->user_mii_bus = bus;
bus->priv = priv; bus->priv = priv;
bus->name = KBUILD_MODNAME "-mii"; bus->name = KBUILD_MODNAME "-mii";
snprintf(bus->id, MII_BUS_ID_SIZE, KBUILD_MODNAME "-%d", idx++); snprintf(bus->id, MII_BUS_ID_SIZE, KBUILD_MODNAME "-%d", idx++);
@ -2200,20 +2200,20 @@ mt7530_setup(struct dsa_switch *ds)
u32 id, val; u32 id, val;
int ret, i; int ret, i;
/* The parent node of the master netdev which holds the common system /* The parent node of the conduit netdev which holds the common system
* controller is also the container for two GMAC nodes, represented * controller is also the container for two GMAC nodes, represented
* as two netdev instances. * as two netdev instances.
*/ */
dsa_switch_for_each_cpu_port(cpu_dp, ds) { dsa_switch_for_each_cpu_port(cpu_dp, ds) {
dn = cpu_dp->master->dev.of_node->parent; dn = cpu_dp->conduit->dev.of_node->parent;
/* It doesn't matter which CPU port is found first, /* It doesn't matter which CPU port is found first,
* their masters should share the same parent OF node * their conduits should share the same parent OF node
*/ */
break; break;
} }
if (!dn) { if (!dn) {
dev_err(ds->dev, "parent OF node of DSA master not found"); dev_err(ds->dev, "parent OF node of DSA conduit not found");
return -EINVAL; return -EINVAL;
} }
@ -2488,7 +2488,7 @@ mt7531_setup(struct dsa_switch *ds)
if (mt7531_dual_sgmii_supported(priv)) { if (mt7531_dual_sgmii_supported(priv)) {
priv->p5_intf_sel = P5_INTF_SEL_GMAC5_SGMII; priv->p5_intf_sel = P5_INTF_SEL_GMAC5_SGMII;
/* Let ds->slave_mii_bus be able to access external phy. */ /* Let ds->user_mii_bus be able to access external phy. */
mt7530_rmw(priv, MT7531_GPIO_MODE1, MT7531_GPIO11_RG_RXD2_MASK, mt7530_rmw(priv, MT7531_GPIO_MODE1, MT7531_GPIO11_RG_RXD2_MASK,
MT7531_EXT_P_MDC_11); MT7531_EXT_P_MDC_11);
mt7530_rmw(priv, MT7531_GPIO_MODE1, MT7531_GPIO12_RG_RXD3_MASK, mt7530_rmw(priv, MT7531_GPIO_MODE1, MT7531_GPIO12_RG_RXD3_MASK,
@ -2717,7 +2717,7 @@ mt7531_mac_config(struct dsa_switch *ds, int port, unsigned int mode,
case PHY_INTERFACE_MODE_RGMII_RXID: case PHY_INTERFACE_MODE_RGMII_RXID:
case PHY_INTERFACE_MODE_RGMII_TXID: case PHY_INTERFACE_MODE_RGMII_TXID:
dp = dsa_to_port(ds, port); dp = dsa_to_port(ds, port);
phydev = dp->slave->phydev; phydev = dp->user->phydev;
return mt7531_rgmii_setup(priv, port, interface, phydev); return mt7531_rgmii_setup(priv, port, interface, phydev);
case PHY_INTERFACE_MODE_SGMII: case PHY_INTERFACE_MODE_SGMII:
case PHY_INTERFACE_MODE_NA: case PHY_INTERFACE_MODE_NA:

View File

@ -2486,7 +2486,7 @@ static int mv88e6xxx_port_vlan_add(struct dsa_switch *ds, int port,
else else
member = MV88E6XXX_G1_VTU_DATA_MEMBER_TAG_TAGGED; member = MV88E6XXX_G1_VTU_DATA_MEMBER_TAG_TAGGED;
/* net/dsa/slave.c will call dsa_port_vlan_add() for the affected port /* net/dsa/user.c will call dsa_port_vlan_add() for the affected port
* and then the CPU port. Do not warn for duplicates for the CPU port. * and then the CPU port. Do not warn for duplicates for the CPU port.
*/ */
warn = !dsa_is_cpu_port(ds, port) && !dsa_is_dsa_port(ds, port); warn = !dsa_is_cpu_port(ds, port) && !dsa_is_dsa_port(ds, port);
@ -3719,7 +3719,7 @@ static int mv88e6xxx_setup(struct dsa_switch *ds)
return err; return err;
chip->ds = ds; chip->ds = ds;
ds->slave_mii_bus = mv88e6xxx_default_mdio_bus(chip); ds->user_mii_bus = mv88e6xxx_default_mdio_bus(chip);
/* Since virtual bridges are mapped in the PVT, the number we support /* Since virtual bridges are mapped in the PVT, the number we support
* depends on the physical switch topology. We need to let DSA figure * depends on the physical switch topology. We need to let DSA figure

View File

@ -42,22 +42,22 @@ static struct net_device *felix_classify_db(struct dsa_db db)
} }
} }
static int felix_cpu_port_for_master(struct dsa_switch *ds, static int felix_cpu_port_for_conduit(struct dsa_switch *ds,
struct net_device *master) struct net_device *conduit)
{ {
struct ocelot *ocelot = ds->priv; struct ocelot *ocelot = ds->priv;
struct dsa_port *cpu_dp; struct dsa_port *cpu_dp;
int lag; int lag;
if (netif_is_lag_master(master)) { if (netif_is_lag_master(conduit)) {
mutex_lock(&ocelot->fwd_domain_lock); mutex_lock(&ocelot->fwd_domain_lock);
lag = ocelot_bond_get_id(ocelot, master); lag = ocelot_bond_get_id(ocelot, conduit);
mutex_unlock(&ocelot->fwd_domain_lock); mutex_unlock(&ocelot->fwd_domain_lock);
return lag; return lag;
} }
cpu_dp = master->dsa_ptr; cpu_dp = conduit->dsa_ptr;
return cpu_dp->index; return cpu_dp->index;
} }
@ -366,7 +366,7 @@ static int felix_update_trapping_destinations(struct dsa_switch *ds,
* is the mode through which frames can be injected from and extracted to an * is the mode through which frames can be injected from and extracted to an
* external CPU, over Ethernet. In NXP SoCs, the "external CPU" is the ARM CPU * external CPU, over Ethernet. In NXP SoCs, the "external CPU" is the ARM CPU
* running Linux, and this forms a DSA setup together with the enetc or fman * running Linux, and this forms a DSA setup together with the enetc or fman
* DSA master. * DSA conduit.
*/ */
static void felix_npi_port_init(struct ocelot *ocelot, int port) static void felix_npi_port_init(struct ocelot *ocelot, int port)
{ {
@ -441,16 +441,16 @@ static unsigned long felix_tag_npi_get_host_fwd_mask(struct dsa_switch *ds)
return BIT(ocelot->num_phys_ports); return BIT(ocelot->num_phys_ports);
} }
static int felix_tag_npi_change_master(struct dsa_switch *ds, int port, static int felix_tag_npi_change_conduit(struct dsa_switch *ds, int port,
struct net_device *master, struct net_device *conduit,
struct netlink_ext_ack *extack) struct netlink_ext_ack *extack)
{ {
struct dsa_port *dp = dsa_to_port(ds, port), *other_dp; struct dsa_port *dp = dsa_to_port(ds, port), *other_dp;
struct ocelot *ocelot = ds->priv; struct ocelot *ocelot = ds->priv;
if (netif_is_lag_master(master)) { if (netif_is_lag_master(conduit)) {
NL_SET_ERR_MSG_MOD(extack, NL_SET_ERR_MSG_MOD(extack,
"LAG DSA master only supported using ocelot-8021q"); "LAG DSA conduit only supported using ocelot-8021q");
return -EOPNOTSUPP; return -EOPNOTSUPP;
} }
@ -459,24 +459,24 @@ static int felix_tag_npi_change_master(struct dsa_switch *ds, int port,
* come back up until they're all changed to the new one. * come back up until they're all changed to the new one.
*/ */
dsa_switch_for_each_user_port(other_dp, ds) { dsa_switch_for_each_user_port(other_dp, ds) {
struct net_device *slave = other_dp->slave; struct net_device *user = other_dp->user;
if (other_dp != dp && (slave->flags & IFF_UP) && if (other_dp != dp && (user->flags & IFF_UP) &&
dsa_port_to_master(other_dp) != master) { dsa_port_to_conduit(other_dp) != conduit) {
NL_SET_ERR_MSG_MOD(extack, NL_SET_ERR_MSG_MOD(extack,
"Cannot change while old master still has users"); "Cannot change while old conduit still has users");
return -EOPNOTSUPP; return -EOPNOTSUPP;
} }
} }
felix_npi_port_deinit(ocelot, ocelot->npi); felix_npi_port_deinit(ocelot, ocelot->npi);
felix_npi_port_init(ocelot, felix_cpu_port_for_master(ds, master)); felix_npi_port_init(ocelot, felix_cpu_port_for_conduit(ds, conduit));
return 0; return 0;
} }
/* Alternatively to using the NPI functionality, that same hardware MAC /* Alternatively to using the NPI functionality, that same hardware MAC
* connected internally to the enetc or fman DSA master can be configured to * connected internally to the enetc or fman DSA conduit can be configured to
* use the software-defined tag_8021q frame format. As far as the hardware is * use the software-defined tag_8021q frame format. As far as the hardware is
* concerned, it thinks it is a "dumb switch" - the queues of the CPU port * concerned, it thinks it is a "dumb switch" - the queues of the CPU port
* module are now disconnected from it, but can still be accessed through * module are now disconnected from it, but can still be accessed through
@ -486,7 +486,7 @@ static const struct felix_tag_proto_ops felix_tag_npi_proto_ops = {
.setup = felix_tag_npi_setup, .setup = felix_tag_npi_setup,
.teardown = felix_tag_npi_teardown, .teardown = felix_tag_npi_teardown,
.get_host_fwd_mask = felix_tag_npi_get_host_fwd_mask, .get_host_fwd_mask = felix_tag_npi_get_host_fwd_mask,
.change_master = felix_tag_npi_change_master, .change_conduit = felix_tag_npi_change_conduit,
}; };
static int felix_tag_8021q_setup(struct dsa_switch *ds) static int felix_tag_8021q_setup(struct dsa_switch *ds)
@ -561,11 +561,11 @@ static unsigned long felix_tag_8021q_get_host_fwd_mask(struct dsa_switch *ds)
return dsa_cpu_ports(ds); return dsa_cpu_ports(ds);
} }
static int felix_tag_8021q_change_master(struct dsa_switch *ds, int port, static int felix_tag_8021q_change_conduit(struct dsa_switch *ds, int port,
struct net_device *master, struct net_device *conduit,
struct netlink_ext_ack *extack) struct netlink_ext_ack *extack)
{ {
int cpu = felix_cpu_port_for_master(ds, master); int cpu = felix_cpu_port_for_conduit(ds, conduit);
struct ocelot *ocelot = ds->priv; struct ocelot *ocelot = ds->priv;
ocelot_port_unassign_dsa_8021q_cpu(ocelot, port); ocelot_port_unassign_dsa_8021q_cpu(ocelot, port);
@ -578,7 +578,7 @@ static const struct felix_tag_proto_ops felix_tag_8021q_proto_ops = {
.setup = felix_tag_8021q_setup, .setup = felix_tag_8021q_setup,
.teardown = felix_tag_8021q_teardown, .teardown = felix_tag_8021q_teardown,
.get_host_fwd_mask = felix_tag_8021q_get_host_fwd_mask, .get_host_fwd_mask = felix_tag_8021q_get_host_fwd_mask,
.change_master = felix_tag_8021q_change_master, .change_conduit = felix_tag_8021q_change_conduit,
}; };
static void felix_set_host_flood(struct dsa_switch *ds, unsigned long mask, static void felix_set_host_flood(struct dsa_switch *ds, unsigned long mask,
@ -741,14 +741,14 @@ static void felix_port_set_host_flood(struct dsa_switch *ds, int port,
!!felix->host_flood_mc_mask, true); !!felix->host_flood_mc_mask, true);
} }
static int felix_port_change_master(struct dsa_switch *ds, int port, static int felix_port_change_conduit(struct dsa_switch *ds, int port,
struct net_device *master, struct net_device *conduit,
struct netlink_ext_ack *extack) struct netlink_ext_ack *extack)
{ {
struct ocelot *ocelot = ds->priv; struct ocelot *ocelot = ds->priv;
struct felix *felix = ocelot_to_felix(ocelot); struct felix *felix = ocelot_to_felix(ocelot);
return felix->tag_proto_ops->change_master(ds, port, master, extack); return felix->tag_proto_ops->change_conduit(ds, port, conduit, extack);
} }
static int felix_set_ageing_time(struct dsa_switch *ds, static int felix_set_ageing_time(struct dsa_switch *ds,
@ -953,7 +953,7 @@ static int felix_lag_join(struct dsa_switch *ds, int port,
if (!dsa_is_cpu_port(ds, port)) if (!dsa_is_cpu_port(ds, port))
return 0; return 0;
return felix_port_change_master(ds, port, lag.dev, extack); return felix_port_change_conduit(ds, port, lag.dev, extack);
} }
static int felix_lag_leave(struct dsa_switch *ds, int port, static int felix_lag_leave(struct dsa_switch *ds, int port,
@ -967,7 +967,7 @@ static int felix_lag_leave(struct dsa_switch *ds, int port,
if (!dsa_is_cpu_port(ds, port)) if (!dsa_is_cpu_port(ds, port))
return 0; return 0;
return felix_port_change_master(ds, port, lag.dev, NULL); return felix_port_change_conduit(ds, port, lag.dev, NULL);
} }
static int felix_lag_change(struct dsa_switch *ds, int port) static int felix_lag_change(struct dsa_switch *ds, int port)
@ -1116,10 +1116,10 @@ static int felix_port_enable(struct dsa_switch *ds, int port,
return 0; return 0;
if (ocelot->npi >= 0) { if (ocelot->npi >= 0) {
struct net_device *master = dsa_port_to_master(dp); struct net_device *conduit = dsa_port_to_conduit(dp);
if (felix_cpu_port_for_master(ds, master) != ocelot->npi) { if (felix_cpu_port_for_conduit(ds, conduit) != ocelot->npi) {
dev_err(ds->dev, "Multiple masters are not allowed\n"); dev_err(ds->dev, "Multiple conduits are not allowed\n");
return -EINVAL; return -EINVAL;
} }
} }
@ -2164,7 +2164,7 @@ const struct dsa_switch_ops felix_switch_ops = {
.port_add_dscp_prio = felix_port_add_dscp_prio, .port_add_dscp_prio = felix_port_add_dscp_prio,
.port_del_dscp_prio = felix_port_del_dscp_prio, .port_del_dscp_prio = felix_port_del_dscp_prio,
.port_set_host_flood = felix_port_set_host_flood, .port_set_host_flood = felix_port_set_host_flood,
.port_change_master = felix_port_change_master, .port_change_conduit = felix_port_change_conduit,
}; };
EXPORT_SYMBOL_GPL(felix_switch_ops); EXPORT_SYMBOL_GPL(felix_switch_ops);
@ -2176,7 +2176,7 @@ struct net_device *felix_port_to_netdev(struct ocelot *ocelot, int port)
if (!dsa_is_user_port(ds, port)) if (!dsa_is_user_port(ds, port))
return NULL; return NULL;
return dsa_to_port(ds, port)->slave; return dsa_to_port(ds, port)->user;
} }
EXPORT_SYMBOL_GPL(felix_port_to_netdev); EXPORT_SYMBOL_GPL(felix_port_to_netdev);

View File

@ -77,9 +77,9 @@ struct felix_tag_proto_ops {
int (*setup)(struct dsa_switch *ds); int (*setup)(struct dsa_switch *ds);
void (*teardown)(struct dsa_switch *ds); void (*teardown)(struct dsa_switch *ds);
unsigned long (*get_host_fwd_mask)(struct dsa_switch *ds); unsigned long (*get_host_fwd_mask)(struct dsa_switch *ds);
int (*change_master)(struct dsa_switch *ds, int port, int (*change_conduit)(struct dsa_switch *ds, int port,
struct net_device *master, struct net_device *conduit,
struct netlink_ext_ack *extack); struct netlink_ext_ack *extack);
}; };
extern const struct dsa_switch_ops felix_switch_ops; extern const struct dsa_switch_ops felix_switch_ops;

View File

@ -323,14 +323,14 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
mutex_lock(&mgmt_eth_data->mutex); mutex_lock(&mgmt_eth_data->mutex);
/* Check mgmt_master if is operational */ /* Check if the mgmt_conduit is operational */
if (!priv->mgmt_master) { if (!priv->mgmt_conduit) {
kfree_skb(skb); kfree_skb(skb);
mutex_unlock(&mgmt_eth_data->mutex); mutex_unlock(&mgmt_eth_data->mutex);
return -EINVAL; return -EINVAL;
} }
skb->dev = priv->mgmt_master; skb->dev = priv->mgmt_conduit;
reinit_completion(&mgmt_eth_data->rw_done); reinit_completion(&mgmt_eth_data->rw_done);
@ -375,14 +375,14 @@ static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
mutex_lock(&mgmt_eth_data->mutex); mutex_lock(&mgmt_eth_data->mutex);
/* Check mgmt_master if is operational */ /* Check if the mgmt_conduit is operational */
if (!priv->mgmt_master) { if (!priv->mgmt_conduit) {
kfree_skb(skb); kfree_skb(skb);
mutex_unlock(&mgmt_eth_data->mutex); mutex_unlock(&mgmt_eth_data->mutex);
return -EINVAL; return -EINVAL;
} }
skb->dev = priv->mgmt_master; skb->dev = priv->mgmt_conduit;
reinit_completion(&mgmt_eth_data->rw_done); reinit_completion(&mgmt_eth_data->rw_done);
@ -508,7 +508,7 @@ qca8k_bulk_read(void *ctx, const void *reg_buf, size_t reg_len,
struct qca8k_priv *priv = ctx; struct qca8k_priv *priv = ctx;
u32 reg = *(u16 *)reg_buf; u32 reg = *(u16 *)reg_buf;
if (priv->mgmt_master && if (priv->mgmt_conduit &&
!qca8k_read_eth(priv, reg, val_buf, val_len)) !qca8k_read_eth(priv, reg, val_buf, val_len))
return 0; return 0;
@ -531,7 +531,7 @@ qca8k_bulk_gather_write(void *ctx, const void *reg_buf, size_t reg_len,
u32 reg = *(u16 *)reg_buf; u32 reg = *(u16 *)reg_buf;
u32 *val = (u32 *)val_buf; u32 *val = (u32 *)val_buf;
if (priv->mgmt_master && if (priv->mgmt_conduit &&
!qca8k_write_eth(priv, reg, val, val_len)) !qca8k_write_eth(priv, reg, val, val_len))
return 0; return 0;
@ -626,7 +626,7 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
struct sk_buff *write_skb, *clear_skb, *read_skb; struct sk_buff *write_skb, *clear_skb, *read_skb;
struct qca8k_mgmt_eth_data *mgmt_eth_data; struct qca8k_mgmt_eth_data *mgmt_eth_data;
u32 write_val, clear_val = 0, val; u32 write_val, clear_val = 0, val;
struct net_device *mgmt_master; struct net_device *mgmt_conduit;
int ret, ret1; int ret, ret1;
bool ack; bool ack;
@ -683,18 +683,18 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
*/ */
mutex_lock(&mgmt_eth_data->mutex); mutex_lock(&mgmt_eth_data->mutex);
/* Check if mgmt_master is operational */ /* Check if mgmt_conduit is operational */
mgmt_master = priv->mgmt_master; mgmt_conduit = priv->mgmt_conduit;
if (!mgmt_master) { if (!mgmt_conduit) {
mutex_unlock(&mgmt_eth_data->mutex); mutex_unlock(&mgmt_eth_data->mutex);
mutex_unlock(&priv->bus->mdio_lock); mutex_unlock(&priv->bus->mdio_lock);
ret = -EINVAL; ret = -EINVAL;
goto err_mgmt_master; goto err_mgmt_conduit;
} }
read_skb->dev = mgmt_master; read_skb->dev = mgmt_conduit;
clear_skb->dev = mgmt_master; clear_skb->dev = mgmt_conduit;
write_skb->dev = mgmt_master; write_skb->dev = mgmt_conduit;
reinit_completion(&mgmt_eth_data->rw_done); reinit_completion(&mgmt_eth_data->rw_done);
@ -780,7 +780,7 @@ exit:
return ret; return ret;
/* Error handling before lock */ /* Error handling before lock */
err_mgmt_master: err_mgmt_conduit:
kfree_skb(read_skb); kfree_skb(read_skb);
err_read_skb: err_read_skb:
kfree_skb(clear_skb); kfree_skb(clear_skb);
@ -959,12 +959,12 @@ qca8k_mdio_register(struct qca8k_priv *priv)
ds->dst->index, ds->index); ds->dst->index, ds->index);
bus->parent = ds->dev; bus->parent = ds->dev;
bus->phy_mask = ~ds->phys_mii_mask; bus->phy_mask = ~ds->phys_mii_mask;
ds->slave_mii_bus = bus; ds->user_mii_bus = bus;
/* Check if the devicetree declares the port:phy mapping */ /* Check if the devicetree declares the port:phy mapping */
mdio = of_get_child_by_name(priv->dev->of_node, "mdio"); mdio = of_get_child_by_name(priv->dev->of_node, "mdio");
if (of_device_is_available(mdio)) { if (of_device_is_available(mdio)) {
bus->name = "qca8k slave mii"; bus->name = "qca8k user mii";
bus->read = qca8k_internal_mdio_read; bus->read = qca8k_internal_mdio_read;
bus->write = qca8k_internal_mdio_write; bus->write = qca8k_internal_mdio_write;
return devm_of_mdiobus_register(priv->dev, bus, mdio); return devm_of_mdiobus_register(priv->dev, bus, mdio);
@ -973,7 +973,7 @@ qca8k_mdio_register(struct qca8k_priv *priv)
/* If a mapping can't be found, the legacy mapping is used, /* If a mapping can't be found, the legacy mapping is used,
* using the qca8k_port_to_phy function * using the qca8k_port_to_phy function
*/ */
bus->name = "qca8k-legacy slave mii"; bus->name = "qca8k-legacy user mii";
bus->read = qca8k_legacy_mdio_read; bus->read = qca8k_legacy_mdio_read;
bus->write = qca8k_legacy_mdio_write; bus->write = qca8k_legacy_mdio_write;
return devm_mdiobus_register(priv->dev, bus); return devm_mdiobus_register(priv->dev, bus);
@ -1728,10 +1728,10 @@ qca8k_get_tag_protocol(struct dsa_switch *ds, int port,
} }
static void static void
qca8k_master_change(struct dsa_switch *ds, const struct net_device *master, qca8k_conduit_change(struct dsa_switch *ds, const struct net_device *conduit,
bool operational) bool operational)
{ {
struct dsa_port *dp = master->dsa_ptr; struct dsa_port *dp = conduit->dsa_ptr;
struct qca8k_priv *priv = ds->priv; struct qca8k_priv *priv = ds->priv;
/* Ethernet MIB/MDIO is only supported for CPU port 0 */ /* Ethernet MIB/MDIO is only supported for CPU port 0 */
@ -1741,7 +1741,7 @@ qca8k_master_change(struct dsa_switch *ds, const struct net_device *master,
mutex_lock(&priv->mgmt_eth_data.mutex); mutex_lock(&priv->mgmt_eth_data.mutex);
mutex_lock(&priv->mib_eth_data.mutex); mutex_lock(&priv->mib_eth_data.mutex);
priv->mgmt_master = operational ? (struct net_device *)master : NULL; priv->mgmt_conduit = operational ? (struct net_device *)conduit : NULL;
mutex_unlock(&priv->mib_eth_data.mutex); mutex_unlock(&priv->mib_eth_data.mutex);
mutex_unlock(&priv->mgmt_eth_data.mutex); mutex_unlock(&priv->mgmt_eth_data.mutex);
@ -2016,7 +2016,7 @@ static const struct dsa_switch_ops qca8k_switch_ops = {
.get_phy_flags = qca8k_get_phy_flags, .get_phy_flags = qca8k_get_phy_flags,
.port_lag_join = qca8k_port_lag_join, .port_lag_join = qca8k_port_lag_join,
.port_lag_leave = qca8k_port_lag_leave, .port_lag_leave = qca8k_port_lag_leave,
.master_state_change = qca8k_master_change, .conduit_state_change = qca8k_conduit_change,
.connect_tag_protocol = qca8k_connect_tag_protocol, .connect_tag_protocol = qca8k_connect_tag_protocol,
}; };

View File

@ -499,7 +499,7 @@ void qca8k_get_ethtool_stats(struct dsa_switch *ds, int port,
u32 hi = 0; u32 hi = 0;
int ret; int ret;
if (priv->mgmt_master && priv->info->ops->autocast_mib && if (priv->mgmt_conduit && priv->info->ops->autocast_mib &&
priv->info->ops->autocast_mib(ds, port, data) > 0) priv->info->ops->autocast_mib(ds, port, data) > 0)
return; return;
@ -761,7 +761,7 @@ int qca8k_port_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
int ret; int ret;
/* We only have a general MTU setting. /* We only have a general MTU setting.
* DSA always sets the CPU port's MTU to the largest MTU of the slave * DSA always sets the CPU port's MTU to the largest MTU of the user
* ports. * ports.
* Setting MTU just for the CPU port is sufficient to correctly set a * Setting MTU just for the CPU port is sufficient to correctly set a
* value for every port. * value for every port.

View File

@ -356,8 +356,8 @@ static struct device *qca8k_cled_hw_control_get_device(struct led_classdev *ldev
dp = dsa_to_port(priv->ds, qca8k_phy_to_port(led->port_num)); dp = dsa_to_port(priv->ds, qca8k_phy_to_port(led->port_num));
if (!dp) if (!dp)
return NULL; return NULL;
if (dp->slave) if (dp->user)
return &dp->slave->dev; return &dp->user->dev;
return NULL; return NULL;
} }
@ -429,7 +429,7 @@ qca8k_parse_port_leds(struct qca8k_priv *priv, struct fwnode_handle *port, int p
init_data.default_label = ":port"; init_data.default_label = ":port";
init_data.fwnode = led; init_data.fwnode = led;
init_data.devname_mandatory = true; init_data.devname_mandatory = true;
init_data.devicename = kasprintf(GFP_KERNEL, "%s:0%d", ds->slave_mii_bus->id, init_data.devicename = kasprintf(GFP_KERNEL, "%s:0%d", ds->user_mii_bus->id,
port_num); port_num);
if (!init_data.devicename) if (!init_data.devicename)
return -ENOMEM; return -ENOMEM;

View File

@ -458,7 +458,7 @@ struct qca8k_priv {
struct mutex reg_mutex; struct mutex reg_mutex;
struct device *dev; struct device *dev;
struct gpio_desc *reset_gpio; struct gpio_desc *reset_gpio;
struct net_device *mgmt_master; /* Track if mdio/mib Ethernet is available */ struct net_device *mgmt_conduit; /* Track if mdio/mib Ethernet is available */
struct qca8k_mgmt_eth_data mgmt_eth_data; struct qca8k_mgmt_eth_data mgmt_eth_data;
struct qca8k_mib_eth_data mib_eth_data; struct qca8k_mib_eth_data mib_eth_data;
struct qca8k_mdio_cache mdio_cache; struct qca8k_mdio_cache mdio_cache;

View File

@ -378,25 +378,25 @@ static int realtek_smi_setup_mdio(struct dsa_switch *ds)
return -ENODEV; return -ENODEV;
} }
priv->slave_mii_bus = devm_mdiobus_alloc(priv->dev); priv->user_mii_bus = devm_mdiobus_alloc(priv->dev);
if (!priv->slave_mii_bus) { if (!priv->user_mii_bus) {
ret = -ENOMEM; ret = -ENOMEM;
goto err_put_node; goto err_put_node;
} }
priv->slave_mii_bus->priv = priv; priv->user_mii_bus->priv = priv;
priv->slave_mii_bus->name = "SMI slave MII"; priv->user_mii_bus->name = "SMI user MII";
priv->slave_mii_bus->read = realtek_smi_mdio_read; priv->user_mii_bus->read = realtek_smi_mdio_read;
priv->slave_mii_bus->write = realtek_smi_mdio_write; priv->user_mii_bus->write = realtek_smi_mdio_write;
snprintf(priv->slave_mii_bus->id, MII_BUS_ID_SIZE, "SMI-%d", snprintf(priv->user_mii_bus->id, MII_BUS_ID_SIZE, "SMI-%d",
ds->index); ds->index);
priv->slave_mii_bus->dev.of_node = mdio_np; priv->user_mii_bus->dev.of_node = mdio_np;
priv->slave_mii_bus->parent = priv->dev; priv->user_mii_bus->parent = priv->dev;
ds->slave_mii_bus = priv->slave_mii_bus; ds->user_mii_bus = priv->user_mii_bus;
ret = devm_of_mdiobus_register(priv->dev, priv->slave_mii_bus, mdio_np); ret = devm_of_mdiobus_register(priv->dev, priv->user_mii_bus, mdio_np);
if (ret) { if (ret) {
dev_err(priv->dev, "unable to register MDIO bus %s\n", dev_err(priv->dev, "unable to register MDIO bus %s\n",
priv->slave_mii_bus->id); priv->user_mii_bus->id);
goto err_put_node; goto err_put_node;
} }
@ -514,8 +514,8 @@ static void realtek_smi_remove(struct platform_device *pdev)
return; return;
dsa_unregister_switch(priv->ds); dsa_unregister_switch(priv->ds);
if (priv->slave_mii_bus) if (priv->user_mii_bus)
of_node_put(priv->slave_mii_bus->dev.of_node); of_node_put(priv->user_mii_bus->dev.of_node);
/* leave the device reset asserted */ /* leave the device reset asserted */
if (priv->reset) if (priv->reset)

View File

@ -54,7 +54,7 @@ struct realtek_priv {
struct regmap *map; struct regmap *map;
struct regmap *map_nolock; struct regmap *map_nolock;
struct mutex map_lock; struct mutex map_lock;
struct mii_bus *slave_mii_bus; struct mii_bus *user_mii_bus;
struct mii_bus *bus; struct mii_bus *bus;
int mdio_addr; int mdio_addr;

View File

@ -1144,7 +1144,7 @@ static int rtl8365mb_port_change_mtu(struct dsa_switch *ds, int port,
int frame_size; int frame_size;
/* When a new MTU is set, DSA always sets the CPU port's MTU to the /* When a new MTU is set, DSA always sets the CPU port's MTU to the
* largest MTU of the slave ports. Because the switch only has a global * largest MTU of the user ports. Because the switch only has a global
* RX length register, only allowing CPU port here is enough. * RX length register, only allowing CPU port here is enough.
*/ */
if (!dsa_is_cpu_port(ds, port)) if (!dsa_is_cpu_port(ds, port))

View File

@ -2688,7 +2688,7 @@ static int sja1105_mgmt_xmit(struct dsa_switch *ds, int port, int slot,
} }
/* Transfer skb to the host port. */ /* Transfer skb to the host port. */
dsa_enqueue_skb(skb, dsa_to_port(ds, port)->slave); dsa_enqueue_skb(skb, dsa_to_port(ds, port)->user);
/* Wait until the switch has processed the frame */ /* Wait until the switch has processed the frame */
do { do {
@ -3081,7 +3081,7 @@ static int sja1105_port_bridge_flags(struct dsa_switch *ds, int port,
* ref_clk pin. So port clocking needs to be initialized early, before * ref_clk pin. So port clocking needs to be initialized early, before
* connecting to PHYs is attempted, otherwise they won't respond through MDIO. * connecting to PHYs is attempted, otherwise they won't respond through MDIO.
* Setting correct PHY link speed does not matter now. * Setting correct PHY link speed does not matter now.
* But dsa_slave_phy_setup is called later than sja1105_setup, so the PHY * But dsa_user_phy_setup is called later than sja1105_setup, so the PHY
* bindings are not yet parsed by DSA core. We need to parse early so that we * bindings are not yet parsed by DSA core. We need to parse early so that we
* can populate the xMII mode parameters table. * can populate the xMII mode parameters table.
*/ */

View File

@ -554,7 +554,7 @@ static int xrs700x_hsr_join(struct dsa_switch *ds, int port,
unsigned int val = XRS_HSR_CFG_HSR_PRP; unsigned int val = XRS_HSR_CFG_HSR_PRP;
struct dsa_port *partner = NULL, *dp; struct dsa_port *partner = NULL, *dp;
struct xrs700x *priv = ds->priv; struct xrs700x *priv = ds->priv;
struct net_device *slave; struct net_device *user;
int ret, i, hsr_pair[2]; int ret, i, hsr_pair[2];
enum hsr_version ver; enum hsr_version ver;
bool fwd = false; bool fwd = false;
@ -638,8 +638,8 @@ static int xrs700x_hsr_join(struct dsa_switch *ds, int port,
hsr_pair[0] = port; hsr_pair[0] = port;
hsr_pair[1] = partner->index; hsr_pair[1] = partner->index;
for (i = 0; i < ARRAY_SIZE(hsr_pair); i++) { for (i = 0; i < ARRAY_SIZE(hsr_pair); i++) {
slave = dsa_to_port(ds, hsr_pair[i])->slave; user = dsa_to_port(ds, hsr_pair[i])->user;
slave->features |= XRS7000X_SUPPORTED_HSR_FEATURES; user->features |= XRS7000X_SUPPORTED_HSR_FEATURES;
} }
return 0; return 0;
@ -650,7 +650,7 @@ static int xrs700x_hsr_leave(struct dsa_switch *ds, int port,
{ {
struct dsa_port *partner = NULL, *dp; struct dsa_port *partner = NULL, *dp;
struct xrs700x *priv = ds->priv; struct xrs700x *priv = ds->priv;
struct net_device *slave; struct net_device *user;
int i, hsr_pair[2]; int i, hsr_pair[2];
unsigned int val; unsigned int val;
@ -692,8 +692,8 @@ static int xrs700x_hsr_leave(struct dsa_switch *ds, int port,
hsr_pair[0] = port; hsr_pair[0] = port;
hsr_pair[1] = partner->index; hsr_pair[1] = partner->index;
for (i = 0; i < ARRAY_SIZE(hsr_pair); i++) { for (i = 0; i < ARRAY_SIZE(hsr_pair); i++) {
slave = dsa_to_port(ds, hsr_pair[i])->slave; user = dsa_to_port(ds, hsr_pair[i])->user;
slave->features &= ~XRS7000X_SUPPORTED_HSR_FEATURES; user->features &= ~XRS7000X_SUPPORTED_HSR_FEATURES;
} }
return 0; return 0;

View File

@ -2430,7 +2430,7 @@ static int bcm_sysport_netdevice_event(struct notifier_block *nb,
if (dev->netdev_ops != &bcm_sysport_netdev_ops) if (dev->netdev_ops != &bcm_sysport_netdev_ops)
return NOTIFY_DONE; return NOTIFY_DONE;
if (!dsa_slave_dev_check(info->upper_dev)) if (!dsa_user_dev_check(info->upper_dev))
return NOTIFY_DONE; return NOTIFY_DONE;
if (info->linking) if (info->linking)

View File

@ -3329,7 +3329,7 @@ static int mtk_device_event(struct notifier_block *n, unsigned long event, void
return NOTIFY_DONE; return NOTIFY_DONE;
found: found:
if (!dsa_slave_dev_check(dev)) if (!dsa_user_dev_check(dev))
return NOTIFY_DONE; return NOTIFY_DONE;
if (__ethtool_get_link_ksettings(dev, &s)) if (__ethtool_get_link_ksettings(dev, &s))

View File

@ -175,7 +175,7 @@ mtk_flow_get_dsa_port(struct net_device **dev)
if (dp->cpu_dp->tag_ops->proto != DSA_TAG_PROTO_MTK) if (dp->cpu_dp->tag_ops->proto != DSA_TAG_PROTO_MTK)
return -ENODEV; return -ENODEV;
*dev = dsa_port_to_master(dp); *dev = dsa_port_to_conduit(dp);
return dp->index; return dp->index;
#else #else

View File

@ -28,7 +28,7 @@
/* Source and Destination MAC of follow-up meta frames. /* Source and Destination MAC of follow-up meta frames.
* Whereas the choice of SMAC only affects the unique identification of the * Whereas the choice of SMAC only affects the unique identification of the
* switch as sender of meta frames, the DMAC must be an address that is present * switch as sender of meta frames, the DMAC must be an address that is present
* in the DSA master port's multicast MAC filter. * in the DSA conduit port's multicast MAC filter.
* 01-80-C2-00-00-0E is a good choice for this, as all profiles of IEEE 1588 * 01-80-C2-00-00-0E is a good choice for this, as all profiles of IEEE 1588
* over L2 use this address for some purpose already. * over L2 use this address for some purpose already.
*/ */

View File

@ -102,11 +102,11 @@ struct dsa_device_ops {
const char *name; const char *name;
enum dsa_tag_protocol proto; enum dsa_tag_protocol proto;
/* Some tagging protocols either mangle or shift the destination MAC /* Some tagging protocols either mangle or shift the destination MAC
* address, in which case the DSA master would drop packets on ingress * address, in which case the DSA conduit would drop packets on ingress
* if what it understands out of the destination MAC address is not in * if what it understands out of the destination MAC address is not in
* its RX filter. * its RX filter.
*/ */
bool promisc_on_master; bool promisc_on_conduit;
}; };
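To make the role of this flag concrete, here is a sketch (not from the patch) of a hypothetical tagging driver that rewrites the DMAC and therefore asks DSA to put the conduit into promiscuous mode. example_tag_ops, the placeholder xmit/rcv bodies, the DSA_TAG_PROTO_NONE value and the headroom size are all invented for illustration.

#include <net/dsa.h>

/* Placeholder xmit: a real tagger would insert its switch tag here, possibly
 * rewriting the destination MAC in the process.
 */
static struct sk_buff *example_tag_xmit(struct sk_buff *skb,
					struct net_device *dev)
{
	return skb;
}

/* Placeholder rcv: a real tagger would parse the tag, restore the original
 * DMAC and steer the frame to the right user port.
 */
static struct sk_buff *example_tag_rcv(struct sk_buff *skb,
				       struct net_device *dev)
{
	return skb;
}

static const struct dsa_device_ops example_tag_ops = {
	.name			= "example",
	.proto			= DSA_TAG_PROTO_NONE,	/* placeholder value */
	.xmit			= example_tag_xmit,
	.rcv			= example_tag_rcv,
	.needed_headroom	= 4,
	/* frames reach the conduit with a rewritten DMAC, so its RX filter
	 * would drop them unless promiscuous mode is enabled
	 */
	.promisc_on_conduit	= true,
};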
struct dsa_lag { struct dsa_lag {
@ -236,12 +236,12 @@ struct dsa_bridge {
}; };
struct dsa_port { struct dsa_port {
/* A CPU port is physically connected to a master device. /* A CPU port is physically connected to a conduit device. A user port
* A user port exposed to userspace has a slave device. * exposes a network device to user-space, called 'user' here.
*/ */
union { union {
struct net_device *master; struct net_device *conduit;
struct net_device *slave; struct net_device *user;
}; };
/* Copy of the tagging protocol operations, for quicker access /* Copy of the tagging protocol operations, for quicker access
@ -249,7 +249,7 @@ struct dsa_port {
*/ */
const struct dsa_device_ops *tag_ops; const struct dsa_device_ops *tag_ops;
/* Copies for faster access in master receive hot path */ /* Copies for faster access in conduit receive hot path */
struct dsa_switch_tree *dst; struct dsa_switch_tree *dst;
struct sk_buff *(*rcv)(struct sk_buff *skb, struct net_device *dev); struct sk_buff *(*rcv)(struct sk_buff *skb, struct net_device *dev);
@ -281,9 +281,9 @@ struct dsa_port {
u8 lag_tx_enabled:1; u8 lag_tx_enabled:1;
/* Master state bits, valid only on CPU ports */ /* Conduit state bits, valid only on CPU ports */
u8 master_admin_up:1; u8 conduit_admin_up:1;
u8 master_oper_up:1; u8 conduit_oper_up:1;
/* Valid only on user ports */ /* Valid only on user ports */
u8 cpu_port_in_lag:1; u8 cpu_port_in_lag:1;
@ -303,7 +303,7 @@ struct dsa_port {
struct list_head list; struct list_head list;
/* /*
* Original copy of the master netdev ethtool_ops * Original copy of the conduit netdev ethtool_ops
*/ */
const struct ethtool_ops *orig_ethtool_ops; const struct ethtool_ops *orig_ethtool_ops;
@ -452,10 +452,10 @@ struct dsa_switch {
const struct dsa_switch_ops *ops; const struct dsa_switch_ops *ops;
/* /*
* Slave mii_bus and devices for the individual ports. * User mii_bus and devices for the individual ports.
*/ */
u32 phys_mii_mask; u32 phys_mii_mask;
struct mii_bus *slave_mii_bus; struct mii_bus *user_mii_bus;
/* Ageing Time limits in msecs */ /* Ageing Time limits in msecs */
unsigned int ageing_time_min; unsigned int ageing_time_min;
@ -520,10 +520,10 @@ static inline bool dsa_port_is_unused(struct dsa_port *dp)
return dp->type == DSA_PORT_TYPE_UNUSED; return dp->type == DSA_PORT_TYPE_UNUSED;
} }
static inline bool dsa_port_master_is_operational(struct dsa_port *dp) static inline bool dsa_port_conduit_is_operational(struct dsa_port *dp)
{ {
return dsa_port_is_cpu(dp) && dp->master_admin_up && return dsa_port_is_cpu(dp) && dp->conduit_admin_up &&
dp->master_oper_up; dp->conduit_oper_up;
} }
static inline bool dsa_is_unused_port(struct dsa_switch *ds, int p) static inline bool dsa_is_unused_port(struct dsa_switch *ds, int p)
@ -713,12 +713,12 @@ static inline bool dsa_port_offloads_lag(struct dsa_port *dp,
return dsa_port_lag_dev_get(dp) == lag->dev; return dsa_port_lag_dev_get(dp) == lag->dev;
} }
static inline struct net_device *dsa_port_to_master(const struct dsa_port *dp) static inline struct net_device *dsa_port_to_conduit(const struct dsa_port *dp)
{ {
if (dp->cpu_port_in_lag) if (dp->cpu_port_in_lag)
return dsa_port_lag_dev_get(dp->cpu_dp); return dsa_port_lag_dev_get(dp->cpu_dp);
return dp->cpu_dp->master; return dp->cpu_dp->conduit;
} }
static inline static inline
@ -732,7 +732,7 @@ struct net_device *dsa_port_to_bridge_port(const struct dsa_port *dp)
else if (dp->hsr_dev) else if (dp->hsr_dev)
return dp->hsr_dev; return dp->hsr_dev;
return dp->slave; return dp->user;
} }
static inline struct net_device * static inline struct net_device *
@ -834,9 +834,9 @@ struct dsa_switch_ops {
int (*connect_tag_protocol)(struct dsa_switch *ds, int (*connect_tag_protocol)(struct dsa_switch *ds,
enum dsa_tag_protocol proto); enum dsa_tag_protocol proto);
int (*port_change_master)(struct dsa_switch *ds, int port, int (*port_change_conduit)(struct dsa_switch *ds, int port,
struct net_device *master, struct net_device *conduit,
struct netlink_ext_ack *extack); struct netlink_ext_ack *extack);
/* Optional switch-wide initialization and destruction methods */ /* Optional switch-wide initialization and destruction methods */
int (*setup)(struct dsa_switch *ds); int (*setup)(struct dsa_switch *ds);
@ -1233,11 +1233,11 @@ struct dsa_switch_ops {
int (*tag_8021q_vlan_del)(struct dsa_switch *ds, int port, u16 vid); int (*tag_8021q_vlan_del)(struct dsa_switch *ds, int port, u16 vid);
/* /*
* DSA master tracking operations * DSA conduit tracking operations
*/ */
void (*master_state_change)(struct dsa_switch *ds, void (*conduit_state_change)(struct dsa_switch *ds,
const struct net_device *master, const struct net_device *conduit,
bool operational); bool operational);
}; };
#define DSA_DEVLINK_PARAM_DRIVER(_id, _name, _type, _cmodes) \ #define DSA_DEVLINK_PARAM_DRIVER(_id, _name, _type, _cmodes) \
@ -1374,9 +1374,9 @@ static inline int dsa_switch_resume(struct dsa_switch *ds)
#endif /* CONFIG_PM_SLEEP */ #endif /* CONFIG_PM_SLEEP */
#if IS_ENABLED(CONFIG_NET_DSA) #if IS_ENABLED(CONFIG_NET_DSA)
bool dsa_slave_dev_check(const struct net_device *dev); bool dsa_user_dev_check(const struct net_device *dev);
#else #else
static inline bool dsa_slave_dev_check(const struct net_device *dev) static inline bool dsa_user_dev_check(const struct net_device *dev)
{ {
return false; return false;
} }

View File

@ -13,14 +13,14 @@
extern const struct dsa_stubs *dsa_stubs; extern const struct dsa_stubs *dsa_stubs;
struct dsa_stubs { struct dsa_stubs {
int (*master_hwtstamp_validate)(struct net_device *dev, int (*conduit_hwtstamp_validate)(struct net_device *dev,
const struct kernel_hwtstamp_config *config, const struct kernel_hwtstamp_config *config,
struct netlink_ext_ack *extack); struct netlink_ext_ack *extack);
}; };
static inline int dsa_master_hwtstamp_validate(struct net_device *dev, static inline int dsa_conduit_hwtstamp_validate(struct net_device *dev,
const struct kernel_hwtstamp_config *config, const struct kernel_hwtstamp_config *config,
struct netlink_ext_ack *extack) struct netlink_ext_ack *extack)
{ {
if (!netdev_uses_dsa(dev)) if (!netdev_uses_dsa(dev))
return 0; return 0;
@ -29,18 +29,18 @@ static inline int dsa_master_hwtstamp_validate(struct net_device *dev,
* netdev_uses_dsa() returns true, the dsa_core module is still * netdev_uses_dsa() returns true, the dsa_core module is still
* registered, and so, dsa_unregister_stubs() couldn't have run. * registered, and so, dsa_unregister_stubs() couldn't have run.
* For netdev_uses_dsa() to start returning false, it would imply that * For netdev_uses_dsa() to start returning false, it would imply that
* dsa_master_teardown() has executed, which requires rtnl_lock(). * dsa_conduit_teardown() has executed, which requires rtnl_lock().
*/ */
ASSERT_RTNL(); ASSERT_RTNL();
return dsa_stubs->master_hwtstamp_validate(dev, config, extack); return dsa_stubs->conduit_hwtstamp_validate(dev, config, extack);
} }
#else #else
static inline int dsa_master_hwtstamp_validate(struct net_device *dev, static inline int dsa_conduit_hwtstamp_validate(struct net_device *dev,
const struct kernel_hwtstamp_config *config, const struct kernel_hwtstamp_config *config,
struct netlink_ext_ack *extack) struct netlink_ext_ack *extack)
{ {
return 0; return 0;
} }


@ -382,7 +382,7 @@ static int dev_set_hwtstamp(struct net_device *dev, struct ifreq *ifr)
if (err) if (err)
return err; return err;
err = dsa_master_hwtstamp_validate(dev, &kernel_cfg, &extack); err = dsa_conduit_hwtstamp_validate(dev, &kernel_cfg, &extack);
if (err) { if (err) {
if (extack._msg) if (extack._msg)
netdev_err(dev, "%s\n", extack._msg); netdev_err(dev, "%s\n", extack._msg);


@ -8,16 +8,16 @@ endif
# the core # the core
obj-$(CONFIG_NET_DSA) += dsa_core.o obj-$(CONFIG_NET_DSA) += dsa_core.o
dsa_core-y += \ dsa_core-y += \
conduit.o \
devlink.o \ devlink.o \
dsa.o \ dsa.o \
master.o \
netlink.o \ netlink.o \
port.o \ port.o \
slave.o \
switch.o \ switch.o \
tag.o \ tag.o \
tag_8021q.o \ tag_8021q.o \
trace.o trace.o \
user.o
# tagging formats # tagging formats
obj-$(CONFIG_NET_DSA_TAG_AR9331) += tag_ar9331.o obj-$(CONFIG_NET_DSA_TAG_AR9331) += tag_ar9331.o


@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-or-later // SPDX-License-Identifier: GPL-2.0-or-later
/* /*
* Handling of a master device, switching frames via its switch fabric CPU port * Handling of a conduit device, switching frames via its switch fabric CPU port
* *
* Copyright (c) 2017 Savoir-faire Linux Inc. * Copyright (c) 2017 Savoir-faire Linux Inc.
* Vivien Didelot <vivien.didelot@savoirfairelinux.com> * Vivien Didelot <vivien.didelot@savoirfairelinux.com>
@ -11,12 +11,12 @@
#include <linux/netlink.h> #include <linux/netlink.h>
#include <net/dsa.h> #include <net/dsa.h>
#include "conduit.h"
#include "dsa.h" #include "dsa.h"
#include "master.h"
#include "port.h" #include "port.h"
#include "tag.h" #include "tag.h"
static int dsa_master_get_regs_len(struct net_device *dev) static int dsa_conduit_get_regs_len(struct net_device *dev)
{ {
struct dsa_port *cpu_dp = dev->dsa_ptr; struct dsa_port *cpu_dp = dev->dsa_ptr;
const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops; const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops;
@ -45,8 +45,8 @@ static int dsa_master_get_regs_len(struct net_device *dev)
return ret; return ret;
} }
static void dsa_master_get_regs(struct net_device *dev, static void dsa_conduit_get_regs(struct net_device *dev,
struct ethtool_regs *regs, void *data) struct ethtool_regs *regs, void *data)
{ {
struct dsa_port *cpu_dp = dev->dsa_ptr; struct dsa_port *cpu_dp = dev->dsa_ptr;
const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops; const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops;
@ -80,9 +80,9 @@ static void dsa_master_get_regs(struct net_device *dev,
} }
} }
static void dsa_master_get_ethtool_stats(struct net_device *dev, static void dsa_conduit_get_ethtool_stats(struct net_device *dev,
struct ethtool_stats *stats, struct ethtool_stats *stats,
uint64_t *data) uint64_t *data)
{ {
struct dsa_port *cpu_dp = dev->dsa_ptr; struct dsa_port *cpu_dp = dev->dsa_ptr;
const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops; const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops;
@ -99,9 +99,9 @@ static void dsa_master_get_ethtool_stats(struct net_device *dev,
ds->ops->get_ethtool_stats(ds, port, data + count); ds->ops->get_ethtool_stats(ds, port, data + count);
} }
static void dsa_master_get_ethtool_phy_stats(struct net_device *dev, static void dsa_conduit_get_ethtool_phy_stats(struct net_device *dev,
struct ethtool_stats *stats, struct ethtool_stats *stats,
uint64_t *data) uint64_t *data)
{ {
struct dsa_port *cpu_dp = dev->dsa_ptr; struct dsa_port *cpu_dp = dev->dsa_ptr;
const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops; const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops;
@ -125,7 +125,7 @@ static void dsa_master_get_ethtool_phy_stats(struct net_device *dev,
ds->ops->get_ethtool_phy_stats(ds, port, data + count); ds->ops->get_ethtool_phy_stats(ds, port, data + count);
} }
static int dsa_master_get_sset_count(struct net_device *dev, int sset) static int dsa_conduit_get_sset_count(struct net_device *dev, int sset)
{ {
struct dsa_port *cpu_dp = dev->dsa_ptr; struct dsa_port *cpu_dp = dev->dsa_ptr;
const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops; const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops;
@ -147,8 +147,8 @@ static int dsa_master_get_sset_count(struct net_device *dev, int sset)
return count; return count;
} }
static void dsa_master_get_strings(struct net_device *dev, uint32_t stringset, static void dsa_conduit_get_strings(struct net_device *dev, uint32_t stringset,
uint8_t *data) uint8_t *data)
{ {
struct dsa_port *cpu_dp = dev->dsa_ptr; struct dsa_port *cpu_dp = dev->dsa_ptr;
const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops; const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops;
@ -195,12 +195,12 @@ static void dsa_master_get_strings(struct net_device *dev, uint32_t stringset,
} }
} }
/* Deny PTP operations on master if there is at least one switch in the tree /* Deny PTP operations on conduit if there is at least one switch in the tree
* that is PTP capable. * that is PTP capable.
*/ */
int __dsa_master_hwtstamp_validate(struct net_device *dev, int __dsa_conduit_hwtstamp_validate(struct net_device *dev,
const struct kernel_hwtstamp_config *config, const struct kernel_hwtstamp_config *config,
struct netlink_ext_ack *extack) struct netlink_ext_ack *extack)
{ {
struct dsa_port *cpu_dp = dev->dsa_ptr; struct dsa_port *cpu_dp = dev->dsa_ptr;
struct dsa_switch *ds = cpu_dp->ds; struct dsa_switch *ds = cpu_dp->ds;
@ -212,7 +212,7 @@ int __dsa_master_hwtstamp_validate(struct net_device *dev,
list_for_each_entry(dp, &dst->ports, list) { list_for_each_entry(dp, &dst->ports, list) {
if (dsa_port_supports_hwtstamp(dp)) { if (dsa_port_supports_hwtstamp(dp)) {
NL_SET_ERR_MSG(extack, NL_SET_ERR_MSG(extack,
"HW timestamping not allowed on DSA master when switch supports the operation"); "HW timestamping not allowed on DSA conduit when switch supports the operation");
return -EBUSY; return -EBUSY;
} }
} }
@ -220,7 +220,7 @@ int __dsa_master_hwtstamp_validate(struct net_device *dev,
return 0; return 0;
} }
static int dsa_master_ethtool_setup(struct net_device *dev) static int dsa_conduit_ethtool_setup(struct net_device *dev)
{ {
struct dsa_port *cpu_dp = dev->dsa_ptr; struct dsa_port *cpu_dp = dev->dsa_ptr;
struct dsa_switch *ds = cpu_dp->ds; struct dsa_switch *ds = cpu_dp->ds;
@ -237,19 +237,19 @@ static int dsa_master_ethtool_setup(struct net_device *dev)
if (cpu_dp->orig_ethtool_ops) if (cpu_dp->orig_ethtool_ops)
memcpy(ops, cpu_dp->orig_ethtool_ops, sizeof(*ops)); memcpy(ops, cpu_dp->orig_ethtool_ops, sizeof(*ops));
ops->get_regs_len = dsa_master_get_regs_len; ops->get_regs_len = dsa_conduit_get_regs_len;
ops->get_regs = dsa_master_get_regs; ops->get_regs = dsa_conduit_get_regs;
ops->get_sset_count = dsa_master_get_sset_count; ops->get_sset_count = dsa_conduit_get_sset_count;
ops->get_ethtool_stats = dsa_master_get_ethtool_stats; ops->get_ethtool_stats = dsa_conduit_get_ethtool_stats;
ops->get_strings = dsa_master_get_strings; ops->get_strings = dsa_conduit_get_strings;
ops->get_ethtool_phy_stats = dsa_master_get_ethtool_phy_stats; ops->get_ethtool_phy_stats = dsa_conduit_get_ethtool_phy_stats;
dev->ethtool_ops = ops; dev->ethtool_ops = ops;
return 0; return 0;
} }
static void dsa_master_ethtool_teardown(struct net_device *dev) static void dsa_conduit_ethtool_teardown(struct net_device *dev)
{ {
struct dsa_port *cpu_dp = dev->dsa_ptr; struct dsa_port *cpu_dp = dev->dsa_ptr;
@ -260,16 +260,16 @@ static void dsa_master_ethtool_teardown(struct net_device *dev)
cpu_dp->orig_ethtool_ops = NULL; cpu_dp->orig_ethtool_ops = NULL;
} }
/* Keep the master always promiscuous if the tagging protocol requires that /* Keep the conduit always promiscuous if the tagging protocol requires that
* (garbles MAC DA) or if it doesn't support unicast filtering, case in which * (garbles MAC DA) or if it doesn't support unicast filtering, case in which
* it would revert to promiscuous mode as soon as we call dev_uc_add() on it * it would revert to promiscuous mode as soon as we call dev_uc_add() on it
* anyway. * anyway.
*/ */
static void dsa_master_set_promiscuity(struct net_device *dev, int inc) static void dsa_conduit_set_promiscuity(struct net_device *dev, int inc)
{ {
const struct dsa_device_ops *ops = dev->dsa_ptr->tag_ops; const struct dsa_device_ops *ops = dev->dsa_ptr->tag_ops;
if ((dev->priv_flags & IFF_UNICAST_FLT) && !ops->promisc_on_master) if ((dev->priv_flags & IFF_UNICAST_FLT) && !ops->promisc_on_conduit)
return; return;
ASSERT_RTNL(); ASSERT_RTNL();
@ -336,17 +336,17 @@ out:
} }
static DEVICE_ATTR_RW(tagging); static DEVICE_ATTR_RW(tagging);
static struct attribute *dsa_slave_attrs[] = { static struct attribute *dsa_user_attrs[] = {
&dev_attr_tagging.attr, &dev_attr_tagging.attr,
NULL NULL
}; };
static const struct attribute_group dsa_group = { static const struct attribute_group dsa_group = {
.name = "dsa", .name = "dsa",
.attrs = dsa_slave_attrs, .attrs = dsa_user_attrs,
}; };
static void dsa_master_reset_mtu(struct net_device *dev) static void dsa_conduit_reset_mtu(struct net_device *dev)
{ {
int err; int err;
@ -356,7 +356,7 @@ static void dsa_master_reset_mtu(struct net_device *dev)
"Unable to reset MTU to exclude DSA overheads\n"); "Unable to reset MTU to exclude DSA overheads\n");
} }
int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp) int dsa_conduit_setup(struct net_device *dev, struct dsa_port *cpu_dp)
{ {
const struct dsa_device_ops *tag_ops = cpu_dp->tag_ops; const struct dsa_device_ops *tag_ops = cpu_dp->tag_ops;
struct dsa_switch *ds = cpu_dp->ds; struct dsa_switch *ds = cpu_dp->ds;
@ -365,7 +365,7 @@ int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp)
mtu = ETH_DATA_LEN + dsa_tag_protocol_overhead(tag_ops); mtu = ETH_DATA_LEN + dsa_tag_protocol_overhead(tag_ops);
/* The DSA master must use SET_NETDEV_DEV for this to work. */ /* The DSA conduit must use SET_NETDEV_DEV for this to work. */
if (!netif_is_lag_master(dev)) { if (!netif_is_lag_master(dev)) {
consumer_link = device_link_add(ds->dev, dev->dev.parent, consumer_link = device_link_add(ds->dev, dev->dev.parent,
DL_FLAG_AUTOREMOVE_CONSUMER); DL_FLAG_AUTOREMOVE_CONSUMER);
@ -376,7 +376,7 @@ int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp)
} }
/* The switch driver may not implement ->port_change_mtu(), case in /* The switch driver may not implement ->port_change_mtu(), case in
* which dsa_slave_change_mtu() will not update the master MTU either, * which dsa_user_change_mtu() will not update the conduit MTU either,
* so we need to do that here. * so we need to do that here.
*/ */
ret = dev_set_mtu(dev, mtu); ret = dev_set_mtu(dev, mtu);
@ -392,9 +392,9 @@ int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp)
dev->dsa_ptr = cpu_dp; dev->dsa_ptr = cpu_dp;
dsa_master_set_promiscuity(dev, 1); dsa_conduit_set_promiscuity(dev, 1);
ret = dsa_master_ethtool_setup(dev); ret = dsa_conduit_ethtool_setup(dev);
if (ret) if (ret)
goto out_err_reset_promisc; goto out_err_reset_promisc;
@ -405,18 +405,18 @@ int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp)
return ret; return ret;
out_err_ethtool_teardown: out_err_ethtool_teardown:
dsa_master_ethtool_teardown(dev); dsa_conduit_ethtool_teardown(dev);
out_err_reset_promisc: out_err_reset_promisc:
dsa_master_set_promiscuity(dev, -1); dsa_conduit_set_promiscuity(dev, -1);
return ret; return ret;
} }
void dsa_master_teardown(struct net_device *dev) void dsa_conduit_teardown(struct net_device *dev)
{ {
sysfs_remove_group(&dev->dev.kobj, &dsa_group); sysfs_remove_group(&dev->dev.kobj, &dsa_group);
dsa_master_ethtool_teardown(dev); dsa_conduit_ethtool_teardown(dev);
dsa_master_reset_mtu(dev); dsa_conduit_reset_mtu(dev);
dsa_master_set_promiscuity(dev, -1); dsa_conduit_set_promiscuity(dev, -1);
dev->dsa_ptr = NULL; dev->dsa_ptr = NULL;
@ -427,40 +427,40 @@ void dsa_master_teardown(struct net_device *dev)
wmb(); wmb();
} }
int dsa_master_lag_setup(struct net_device *lag_dev, struct dsa_port *cpu_dp, int dsa_conduit_lag_setup(struct net_device *lag_dev, struct dsa_port *cpu_dp,
struct netdev_lag_upper_info *uinfo, struct netdev_lag_upper_info *uinfo,
struct netlink_ext_ack *extack) struct netlink_ext_ack *extack)
{ {
bool master_setup = false; bool conduit_setup = false;
int err; int err;
if (!netdev_uses_dsa(lag_dev)) { if (!netdev_uses_dsa(lag_dev)) {
err = dsa_master_setup(lag_dev, cpu_dp); err = dsa_conduit_setup(lag_dev, cpu_dp);
if (err) if (err)
return err; return err;
master_setup = true; conduit_setup = true;
} }
err = dsa_port_lag_join(cpu_dp, lag_dev, uinfo, extack); err = dsa_port_lag_join(cpu_dp, lag_dev, uinfo, extack);
if (err) { if (err) {
NL_SET_ERR_MSG_WEAK_MOD(extack, "CPU port failed to join LAG"); NL_SET_ERR_MSG_WEAK_MOD(extack, "CPU port failed to join LAG");
goto out_master_teardown; goto out_conduit_teardown;
} }
return 0; return 0;
out_master_teardown: out_conduit_teardown:
if (master_setup) if (conduit_setup)
dsa_master_teardown(lag_dev); dsa_conduit_teardown(lag_dev);
return err; return err;
} }
/* Tear down a master if there isn't any other user port on it, /* Tear down a conduit if there isn't any other user port on it,
* optionally also destroying LAG information. * optionally also destroying LAG information.
*/ */
void dsa_master_lag_teardown(struct net_device *lag_dev, void dsa_conduit_lag_teardown(struct net_device *lag_dev,
struct dsa_port *cpu_dp) struct dsa_port *cpu_dp)
{ {
struct net_device *upper; struct net_device *upper;
struct list_head *iter; struct list_head *iter;
@ -468,8 +468,8 @@ void dsa_master_lag_teardown(struct net_device *lag_dev,
dsa_port_lag_leave(cpu_dp, lag_dev); dsa_port_lag_leave(cpu_dp, lag_dev);
netdev_for_each_upper_dev_rcu(lag_dev, upper, iter) netdev_for_each_upper_dev_rcu(lag_dev, upper, iter)
if (dsa_slave_dev_check(upper)) if (dsa_user_dev_check(upper))
return; return;
dsa_master_teardown(lag_dev); dsa_conduit_teardown(lag_dev);
} }

net/dsa/conduit.h (new file, 22 lines)

@ -0,0 +1,22 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#ifndef __DSA_CONDUIT_H
#define __DSA_CONDUIT_H
struct dsa_port;
struct net_device;
struct netdev_lag_upper_info;
struct netlink_ext_ack;
int dsa_conduit_setup(struct net_device *dev, struct dsa_port *cpu_dp);
void dsa_conduit_teardown(struct net_device *dev);
int dsa_conduit_lag_setup(struct net_device *lag_dev, struct dsa_port *cpu_dp,
struct netdev_lag_upper_info *uinfo,
struct netlink_ext_ack *extack);
void dsa_conduit_lag_teardown(struct net_device *lag_dev,
struct dsa_port *cpu_dp);
int __dsa_conduit_hwtstamp_validate(struct net_device *dev,
const struct kernel_hwtstamp_config *config,
struct netlink_ext_ack *extack);
#endif


@ -20,14 +20,14 @@
#include <net/dsa_stubs.h> #include <net/dsa_stubs.h>
#include <net/sch_generic.h> #include <net/sch_generic.h>
#include "conduit.h"
#include "devlink.h" #include "devlink.h"
#include "dsa.h" #include "dsa.h"
#include "master.h"
#include "netlink.h" #include "netlink.h"
#include "port.h" #include "port.h"
#include "slave.h"
#include "switch.h" #include "switch.h"
#include "tag.h" #include "tag.h"
#include "user.h"
#define DSA_MAX_NUM_OFFLOADING_BRIDGES BITS_PER_LONG #define DSA_MAX_NUM_OFFLOADING_BRIDGES BITS_PER_LONG
@ -365,18 +365,18 @@ static struct dsa_port *dsa_tree_find_first_cpu(struct dsa_switch_tree *dst)
return NULL; return NULL;
} }
struct net_device *dsa_tree_find_first_master(struct dsa_switch_tree *dst) struct net_device *dsa_tree_find_first_conduit(struct dsa_switch_tree *dst)
{ {
struct device_node *ethernet; struct device_node *ethernet;
struct net_device *master; struct net_device *conduit;
struct dsa_port *cpu_dp; struct dsa_port *cpu_dp;
cpu_dp = dsa_tree_find_first_cpu(dst); cpu_dp = dsa_tree_find_first_cpu(dst);
ethernet = of_parse_phandle(cpu_dp->dn, "ethernet", 0); ethernet = of_parse_phandle(cpu_dp->dn, "ethernet", 0);
master = of_find_net_device_by_node(ethernet); conduit = of_find_net_device_by_node(ethernet);
of_node_put(ethernet); of_node_put(ethernet);
return master; return conduit;
} }
/* Assign the default CPU port (the first one in the tree) to all ports of the /* Assign the default CPU port (the first one in the tree) to all ports of the
@ -517,7 +517,7 @@ static int dsa_port_setup(struct dsa_port *dp)
break; break;
case DSA_PORT_TYPE_USER: case DSA_PORT_TYPE_USER:
of_get_mac_address(dp->dn, dp->mac); of_get_mac_address(dp->dn, dp->mac);
err = dsa_slave_create(dp); err = dsa_user_create(dp);
break; break;
} }
@ -554,9 +554,9 @@ static void dsa_port_teardown(struct dsa_port *dp)
dsa_shared_port_link_unregister_of(dp); dsa_shared_port_link_unregister_of(dp);
break; break;
case DSA_PORT_TYPE_USER: case DSA_PORT_TYPE_USER:
if (dp->slave) { if (dp->user) {
dsa_slave_destroy(dp->slave); dsa_user_destroy(dp->user);
dp->slave = NULL; dp->user = NULL;
} }
break; break;
} }
@ -632,9 +632,9 @@ static int dsa_switch_setup(struct dsa_switch *ds)
if (ds->setup) if (ds->setup)
return 0; return 0;
/* Initialize ds->phys_mii_mask before registering the slave MDIO bus /* Initialize ds->phys_mii_mask before registering the user MDIO bus
* driver and before ops->setup() has run, since the switch drivers and * driver and before ops->setup() has run, since the switch drivers and
* the slave MDIO bus driver rely on these values for probing PHY * the user MDIO bus driver rely on these values for probing PHY
* devices or not * devices or not
*/ */
ds->phys_mii_mask |= dsa_user_ports(ds); ds->phys_mii_mask |= dsa_user_ports(ds);
@ -657,21 +657,21 @@ static int dsa_switch_setup(struct dsa_switch *ds)
if (err) if (err)
goto teardown; goto teardown;
if (!ds->slave_mii_bus && ds->ops->phy_read) { if (!ds->user_mii_bus && ds->ops->phy_read) {
ds->slave_mii_bus = mdiobus_alloc(); ds->user_mii_bus = mdiobus_alloc();
if (!ds->slave_mii_bus) { if (!ds->user_mii_bus) {
err = -ENOMEM; err = -ENOMEM;
goto teardown; goto teardown;
} }
dsa_slave_mii_bus_init(ds); dsa_user_mii_bus_init(ds);
dn = of_get_child_by_name(ds->dev->of_node, "mdio"); dn = of_get_child_by_name(ds->dev->of_node, "mdio");
err = of_mdiobus_register(ds->slave_mii_bus, dn); err = of_mdiobus_register(ds->user_mii_bus, dn);
of_node_put(dn); of_node_put(dn);
if (err < 0) if (err < 0)
goto free_slave_mii_bus; goto free_user_mii_bus;
} }
dsa_switch_devlink_register(ds); dsa_switch_devlink_register(ds);
@ -679,9 +679,9 @@ static int dsa_switch_setup(struct dsa_switch *ds)
ds->setup = true; ds->setup = true;
return 0; return 0;
free_slave_mii_bus: free_user_mii_bus:
if (ds->slave_mii_bus && ds->ops->phy_read) if (ds->user_mii_bus && ds->ops->phy_read)
mdiobus_free(ds->slave_mii_bus); mdiobus_free(ds->user_mii_bus);
teardown: teardown:
if (ds->ops->teardown) if (ds->ops->teardown)
ds->ops->teardown(ds); ds->ops->teardown(ds);
@ -699,10 +699,10 @@ static void dsa_switch_teardown(struct dsa_switch *ds)
dsa_switch_devlink_unregister(ds); dsa_switch_devlink_unregister(ds);
if (ds->slave_mii_bus && ds->ops->phy_read) { if (ds->user_mii_bus && ds->ops->phy_read) {
mdiobus_unregister(ds->slave_mii_bus); mdiobus_unregister(ds->user_mii_bus);
mdiobus_free(ds->slave_mii_bus); mdiobus_free(ds->user_mii_bus);
ds->slave_mii_bus = NULL; ds->user_mii_bus = NULL;
} }
dsa_switch_teardown_tag_protocol(ds); dsa_switch_teardown_tag_protocol(ds);
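
For drivers that only implement ->phy_read()/->phy_write(), the core now allocates and registers ds->user_mii_bus (formerly ds->slave_mii_bus) as shown above. A minimal sketch, assuming the phy_read/phy_write signatures from include/net/dsa.h (hypothetical "bar" driver, illustrative only):

#include <net/dsa.h>

static int bar_phy_read(struct dsa_switch *ds, int port, int regnum)
{
	/* Read an internal PHY register; 0xffff means "nothing attached". */
	return 0xffff;
}

static int bar_phy_write(struct dsa_switch *ds, int port, int regnum, u16 val)
{
	return 0;
}

static const struct dsa_switch_ops bar_switch_ops = {
	.phy_read	= bar_phy_read,
	.phy_write	= bar_phy_write,
};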
@ -793,7 +793,7 @@ static int dsa_tree_setup_switches(struct dsa_switch_tree *dst)
return err; return err;
} }
static int dsa_tree_setup_master(struct dsa_switch_tree *dst) static int dsa_tree_setup_conduit(struct dsa_switch_tree *dst)
{ {
struct dsa_port *cpu_dp; struct dsa_port *cpu_dp;
int err = 0; int err = 0;
@ -801,18 +801,18 @@ static int dsa_tree_setup_master(struct dsa_switch_tree *dst)
rtnl_lock(); rtnl_lock();
dsa_tree_for_each_cpu_port(cpu_dp, dst) { dsa_tree_for_each_cpu_port(cpu_dp, dst) {
struct net_device *master = cpu_dp->master; struct net_device *conduit = cpu_dp->conduit;
bool admin_up = (master->flags & IFF_UP) && bool admin_up = (conduit->flags & IFF_UP) &&
!qdisc_tx_is_noop(master); !qdisc_tx_is_noop(conduit);
err = dsa_master_setup(master, cpu_dp); err = dsa_conduit_setup(conduit, cpu_dp);
if (err) if (err)
break; break;
/* Replay master state event */ /* Replay conduit state event */
dsa_tree_master_admin_state_change(dst, master, admin_up); dsa_tree_conduit_admin_state_change(dst, conduit, admin_up);
dsa_tree_master_oper_state_change(dst, master, dsa_tree_conduit_oper_state_change(dst, conduit,
netif_oper_up(master)); netif_oper_up(conduit));
} }
rtnl_unlock(); rtnl_unlock();
@ -820,22 +820,22 @@ static int dsa_tree_setup_master(struct dsa_switch_tree *dst)
return err; return err;
} }
static void dsa_tree_teardown_master(struct dsa_switch_tree *dst) static void dsa_tree_teardown_conduit(struct dsa_switch_tree *dst)
{ {
struct dsa_port *cpu_dp; struct dsa_port *cpu_dp;
rtnl_lock(); rtnl_lock();
dsa_tree_for_each_cpu_port(cpu_dp, dst) { dsa_tree_for_each_cpu_port(cpu_dp, dst) {
struct net_device *master = cpu_dp->master; struct net_device *conduit = cpu_dp->conduit;
/* Synthesizing an "admin down" state is sufficient for /* Synthesizing an "admin down" state is sufficient for
* the switches to get a notification if the master is * the switches to get a notification if the conduit is
* currently up and running. * currently up and running.
*/ */
dsa_tree_master_admin_state_change(dst, master, false); dsa_tree_conduit_admin_state_change(dst, conduit, false);
dsa_master_teardown(master); dsa_conduit_teardown(conduit);
} }
rtnl_unlock(); rtnl_unlock();
@ -894,13 +894,13 @@ static int dsa_tree_setup(struct dsa_switch_tree *dst)
if (err) if (err)
goto teardown_switches; goto teardown_switches;
err = dsa_tree_setup_master(dst); err = dsa_tree_setup_conduit(dst);
if (err) if (err)
goto teardown_ports; goto teardown_ports;
err = dsa_tree_setup_lags(dst); err = dsa_tree_setup_lags(dst);
if (err) if (err)
goto teardown_master; goto teardown_conduit;
dst->setup = true; dst->setup = true;
@ -908,8 +908,8 @@ static int dsa_tree_setup(struct dsa_switch_tree *dst)
return 0; return 0;
teardown_master: teardown_conduit:
dsa_tree_teardown_master(dst); dsa_tree_teardown_conduit(dst);
teardown_ports: teardown_ports:
dsa_tree_teardown_ports(dst); dsa_tree_teardown_ports(dst);
teardown_switches: teardown_switches:
@ -929,7 +929,7 @@ static void dsa_tree_teardown(struct dsa_switch_tree *dst)
dsa_tree_teardown_lags(dst); dsa_tree_teardown_lags(dst);
dsa_tree_teardown_master(dst); dsa_tree_teardown_conduit(dst);
dsa_tree_teardown_ports(dst); dsa_tree_teardown_ports(dst);
@ -978,7 +978,7 @@ out_disconnect:
return err; return err;
} }
/* Since the dsa/tagging sysfs device attribute is per master, the assumption /* Since the dsa/tagging sysfs device attribute is per conduit, the assumption
* is that all DSA switches within a tree share the same tagger, otherwise * is that all DSA switches within a tree share the same tagger, otherwise
* they would have formed disjoint trees (different "dsa,member" values). * they would have formed disjoint trees (different "dsa,member" values).
*/ */
@ -999,10 +999,10 @@ int dsa_tree_change_tag_proto(struct dsa_switch_tree *dst,
* restriction, there needs to be another mutex which serializes this. * restriction, there needs to be another mutex which serializes this.
*/ */
dsa_tree_for_each_user_port(dp, dst) { dsa_tree_for_each_user_port(dp, dst) {
if (dsa_port_to_master(dp)->flags & IFF_UP) if (dsa_port_to_conduit(dp)->flags & IFF_UP)
goto out_unlock; goto out_unlock;
if (dp->slave->flags & IFF_UP) if (dp->user->flags & IFF_UP)
goto out_unlock; goto out_unlock;
} }
@ -1028,62 +1028,62 @@ out_unlock:
return err; return err;
} }
static void dsa_tree_master_state_change(struct dsa_switch_tree *dst, static void dsa_tree_conduit_state_change(struct dsa_switch_tree *dst,
struct net_device *master) struct net_device *conduit)
{ {
struct dsa_notifier_master_state_info info; struct dsa_notifier_conduit_state_info info;
struct dsa_port *cpu_dp = master->dsa_ptr; struct dsa_port *cpu_dp = conduit->dsa_ptr;
info.master = master; info.conduit = conduit;
info.operational = dsa_port_master_is_operational(cpu_dp); info.operational = dsa_port_conduit_is_operational(cpu_dp);
dsa_tree_notify(dst, DSA_NOTIFIER_MASTER_STATE_CHANGE, &info); dsa_tree_notify(dst, DSA_NOTIFIER_CONDUIT_STATE_CHANGE, &info);
} }
void dsa_tree_master_admin_state_change(struct dsa_switch_tree *dst, void dsa_tree_conduit_admin_state_change(struct dsa_switch_tree *dst,
struct net_device *master, struct net_device *conduit,
bool up)
{
struct dsa_port *cpu_dp = conduit->dsa_ptr;
bool notify = false;
/* Don't keep track of admin state on LAG DSA conduits,
* but rather just of physical DSA conduits
*/
if (netif_is_lag_master(conduit))
return;
if ((dsa_port_conduit_is_operational(cpu_dp)) !=
(up && cpu_dp->conduit_oper_up))
notify = true;
cpu_dp->conduit_admin_up = up;
if (notify)
dsa_tree_conduit_state_change(dst, conduit);
}
void dsa_tree_conduit_oper_state_change(struct dsa_switch_tree *dst,
struct net_device *conduit,
bool up) bool up)
{ {
struct dsa_port *cpu_dp = master->dsa_ptr; struct dsa_port *cpu_dp = conduit->dsa_ptr;
bool notify = false; bool notify = false;
/* Don't keep track of admin state on LAG DSA masters, /* Don't keep track of oper state on LAG DSA conduits,
* but rather just of physical DSA masters * but rather just of physical DSA conduits
*/ */
if (netif_is_lag_master(master)) if (netif_is_lag_master(conduit))
return; return;
if ((dsa_port_master_is_operational(cpu_dp)) != if ((dsa_port_conduit_is_operational(cpu_dp)) !=
(up && cpu_dp->master_oper_up)) (cpu_dp->conduit_admin_up && up))
notify = true; notify = true;
cpu_dp->master_admin_up = up; cpu_dp->conduit_oper_up = up;
if (notify) if (notify)
dsa_tree_master_state_change(dst, master); dsa_tree_conduit_state_change(dst, conduit);
}
void dsa_tree_master_oper_state_change(struct dsa_switch_tree *dst,
struct net_device *master,
bool up)
{
struct dsa_port *cpu_dp = master->dsa_ptr;
bool notify = false;
/* Don't keep track of oper state on LAG DSA masters,
* but rather just of physical DSA masters
*/
if (netif_is_lag_master(master))
return;
if ((dsa_port_master_is_operational(cpu_dp)) !=
(cpu_dp->master_admin_up && up))
notify = true;
cpu_dp->master_oper_up = up;
if (notify)
dsa_tree_master_state_change(dst, master);
} }
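
The two helpers above only notify switches when the combined admin+oper view of a physical conduit actually changes. A hedged sketch of how a netdevice notifier inside the DSA core could feed them; the real caller is the notifier in net/dsa/user.c, which this excerpt does not show and which handles more events:

#include <linux/netdevice.h>
#include <net/sch_generic.h>
#include "dsa.h"

static int example_conduit_netdev_event(struct notifier_block *nb,
					unsigned long event, void *ptr)
{
	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
	struct dsa_port *cpu_dp;

	if (!netdev_uses_dsa(dev))
		return NOTIFY_DONE;

	cpu_dp = dev->dsa_ptr;

	switch (event) {
	case NETDEV_UP:
	case NETDEV_CHANGE:
		/* Admin state also requires an active qdisc; oper state
		 * follows the operational (carrier) flag.
		 */
		dsa_tree_conduit_admin_state_change(cpu_dp->dst, dev,
						    !qdisc_tx_is_noop(dev));
		dsa_tree_conduit_oper_state_change(cpu_dp->dst, dev,
						   netif_oper_up(dev));
		break;
	case NETDEV_GOING_DOWN:
		dsa_tree_conduit_admin_state_change(cpu_dp->dst, dev, false);
		break;
	}

	return NOTIFY_DONE;
}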
static struct dsa_port *dsa_port_touch(struct dsa_switch *ds, int index) static struct dsa_port *dsa_port_touch(struct dsa_switch *ds, int index)
@ -1129,7 +1129,7 @@ static int dsa_port_parse_dsa(struct dsa_port *dp)
} }
static enum dsa_tag_protocol dsa_get_tag_protocol(struct dsa_port *dp, static enum dsa_tag_protocol dsa_get_tag_protocol(struct dsa_port *dp,
struct net_device *master) struct net_device *conduit)
{ {
enum dsa_tag_protocol tag_protocol = DSA_TAG_PROTO_NONE; enum dsa_tag_protocol tag_protocol = DSA_TAG_PROTO_NONE;
struct dsa_switch *mds, *ds = dp->ds; struct dsa_switch *mds, *ds = dp->ds;
@ -1140,21 +1140,21 @@ static enum dsa_tag_protocol dsa_get_tag_protocol(struct dsa_port *dp,
* happens the switch driver may want to know if its tagging protocol * happens the switch driver may want to know if its tagging protocol
* is going to work in such a configuration. * is going to work in such a configuration.
*/ */
if (dsa_slave_dev_check(master)) { if (dsa_user_dev_check(conduit)) {
mdp = dsa_slave_to_port(master); mdp = dsa_user_to_port(conduit);
mds = mdp->ds; mds = mdp->ds;
mdp_upstream = dsa_upstream_port(mds, mdp->index); mdp_upstream = dsa_upstream_port(mds, mdp->index);
tag_protocol = mds->ops->get_tag_protocol(mds, mdp_upstream, tag_protocol = mds->ops->get_tag_protocol(mds, mdp_upstream,
DSA_TAG_PROTO_NONE); DSA_TAG_PROTO_NONE);
} }
/* If the master device is not itself a DSA slave in a disjoint DSA /* If the conduit device is not itself a DSA user in a disjoint DSA
* tree, then return immediately. * tree, then return immediately.
*/ */
return ds->ops->get_tag_protocol(ds, dp->index, tag_protocol); return ds->ops->get_tag_protocol(ds, dp->index, tag_protocol);
} }
static int dsa_port_parse_cpu(struct dsa_port *dp, struct net_device *master, static int dsa_port_parse_cpu(struct dsa_port *dp, struct net_device *conduit,
const char *user_protocol) const char *user_protocol)
{ {
const struct dsa_device_ops *tag_ops = NULL; const struct dsa_device_ops *tag_ops = NULL;
@ -1163,7 +1163,7 @@ static int dsa_port_parse_cpu(struct dsa_port *dp, struct net_device *master,
enum dsa_tag_protocol default_proto; enum dsa_tag_protocol default_proto;
/* Find out which protocol the switch would prefer. */ /* Find out which protocol the switch would prefer. */
default_proto = dsa_get_tag_protocol(dp, master); default_proto = dsa_get_tag_protocol(dp, conduit);
if (dst->default_proto) { if (dst->default_proto) {
if (dst->default_proto != default_proto) { if (dst->default_proto != default_proto) {
dev_err(ds->dev, dev_err(ds->dev,
@ -1218,7 +1218,7 @@ static int dsa_port_parse_cpu(struct dsa_port *dp, struct net_device *master,
dst->tag_ops = tag_ops; dst->tag_ops = tag_ops;
} }
dp->master = master; dp->conduit = conduit;
dp->type = DSA_PORT_TYPE_CPU; dp->type = DSA_PORT_TYPE_CPU;
dsa_port_set_tag_protocol(dp, dst->tag_ops); dsa_port_set_tag_protocol(dp, dst->tag_ops);
dp->dst = dst; dp->dst = dst;
@ -1248,16 +1248,16 @@ static int dsa_port_parse_of(struct dsa_port *dp, struct device_node *dn)
dp->dn = dn; dp->dn = dn;
if (ethernet) { if (ethernet) {
struct net_device *master; struct net_device *conduit;
const char *user_protocol; const char *user_protocol;
master = of_find_net_device_by_node(ethernet); conduit = of_find_net_device_by_node(ethernet);
of_node_put(ethernet); of_node_put(ethernet);
if (!master) if (!conduit)
return -EPROBE_DEFER; return -EPROBE_DEFER;
user_protocol = of_get_property(dn, "dsa-tag-protocol", NULL); user_protocol = of_get_property(dn, "dsa-tag-protocol", NULL);
return dsa_port_parse_cpu(dp, master, user_protocol); return dsa_port_parse_cpu(dp, conduit, user_protocol);
} }
if (link) if (link)
@ -1412,15 +1412,15 @@ static int dsa_port_parse(struct dsa_port *dp, const char *name,
struct device *dev) struct device *dev)
{ {
if (!strcmp(name, "cpu")) { if (!strcmp(name, "cpu")) {
struct net_device *master; struct net_device *conduit;
master = dsa_dev_to_net_device(dev); conduit = dsa_dev_to_net_device(dev);
if (!master) if (!conduit)
return -EPROBE_DEFER; return -EPROBE_DEFER;
dev_put(master); dev_put(conduit);
return dsa_port_parse_cpu(dp, master, NULL); return dsa_port_parse_cpu(dp, conduit, NULL);
} }
if (!strcmp(name, "dsa")) if (!strcmp(name, "dsa"))
@ -1566,14 +1566,14 @@ void dsa_unregister_switch(struct dsa_switch *ds)
} }
EXPORT_SYMBOL_GPL(dsa_unregister_switch); EXPORT_SYMBOL_GPL(dsa_unregister_switch);
/* If the DSA master chooses to unregister its net_device on .shutdown, DSA is /* If the DSA conduit chooses to unregister its net_device on .shutdown, DSA is
* blocking that operation from completion, due to the dev_hold taken inside * blocking that operation from completion, due to the dev_hold taken inside
* netdev_upper_dev_link. Unlink the DSA slave interfaces from being uppers of * netdev_upper_dev_link. Unlink the DSA user interfaces from being uppers of
* the DSA master, so that the system can reboot successfully. * the DSA conduit, so that the system can reboot successfully.
*/ */
void dsa_switch_shutdown(struct dsa_switch *ds) void dsa_switch_shutdown(struct dsa_switch *ds)
{ {
struct net_device *master, *slave_dev; struct net_device *conduit, *user_dev;
struct dsa_port *dp; struct dsa_port *dp;
mutex_lock(&dsa2_mutex); mutex_lock(&dsa2_mutex);
@ -1584,17 +1584,17 @@ void dsa_switch_shutdown(struct dsa_switch *ds)
rtnl_lock(); rtnl_lock();
dsa_switch_for_each_user_port(dp, ds) { dsa_switch_for_each_user_port(dp, ds) {
master = dsa_port_to_master(dp); conduit = dsa_port_to_conduit(dp);
slave_dev = dp->slave; user_dev = dp->user;
netdev_upper_dev_unlink(master, slave_dev); netdev_upper_dev_unlink(conduit, user_dev);
} }
/* Disconnect from further netdevice notifiers on the master, /* Disconnect from further netdevice notifiers on the conduit,
* since netdev_uses_dsa() will now return false. * since netdev_uses_dsa() will now return false.
*/ */
dsa_switch_for_each_cpu_port(dp, ds) dsa_switch_for_each_cpu_port(dp, ds)
dp->master->dsa_ptr = NULL; dp->conduit->dsa_ptr = NULL;
rtnl_unlock(); rtnl_unlock();
out: out:
@ -1605,7 +1605,7 @@ EXPORT_SYMBOL_GPL(dsa_switch_shutdown);
#ifdef CONFIG_PM_SLEEP #ifdef CONFIG_PM_SLEEP
static bool dsa_port_is_initialized(const struct dsa_port *dp) static bool dsa_port_is_initialized(const struct dsa_port *dp)
{ {
return dp->type == DSA_PORT_TYPE_USER && dp->slave; return dp->type == DSA_PORT_TYPE_USER && dp->user;
} }
int dsa_switch_suspend(struct dsa_switch *ds) int dsa_switch_suspend(struct dsa_switch *ds)
@ -1613,12 +1613,12 @@ int dsa_switch_suspend(struct dsa_switch *ds)
struct dsa_port *dp; struct dsa_port *dp;
int ret = 0; int ret = 0;
/* Suspend slave network devices */ /* Suspend user network devices */
dsa_switch_for_each_port(dp, ds) { dsa_switch_for_each_port(dp, ds) {
if (!dsa_port_is_initialized(dp)) if (!dsa_port_is_initialized(dp))
continue; continue;
ret = dsa_slave_suspend(dp->slave); ret = dsa_user_suspend(dp->user);
if (ret) if (ret)
return ret; return ret;
} }
@ -1641,12 +1641,12 @@ int dsa_switch_resume(struct dsa_switch *ds)
if (ret) if (ret)
return ret; return ret;
/* Resume slave network devices */ /* Resume user network devices */
dsa_switch_for_each_port(dp, ds) { dsa_switch_for_each_port(dp, ds) {
if (!dsa_port_is_initialized(dp)) if (!dsa_port_is_initialized(dp))
continue; continue;
ret = dsa_slave_resume(dp->slave); ret = dsa_user_resume(dp->user);
if (ret) if (ret)
return ret; return ret;
} }
@ -1658,10 +1658,10 @@ EXPORT_SYMBOL_GPL(dsa_switch_resume);
struct dsa_port *dsa_port_from_netdev(struct net_device *netdev) struct dsa_port *dsa_port_from_netdev(struct net_device *netdev)
{ {
if (!netdev || !dsa_slave_dev_check(netdev)) if (!netdev || !dsa_user_dev_check(netdev))
return ERR_PTR(-ENODEV); return ERR_PTR(-ENODEV);
return dsa_slave_to_port(netdev); return dsa_user_to_port(netdev);
} }
EXPORT_SYMBOL_GPL(dsa_port_from_netdev); EXPORT_SYMBOL_GPL(dsa_port_from_netdev);
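
dsa_port_from_netdev() keeps its name but is now implemented in terms of the user helpers. A minimal usage sketch for an out-of-DSA caller, assuming dsa_port_is_user() from include/net/dsa.h (illustrative only):

#include <linux/err.h>
#include <net/dsa.h>

static bool example_netdev_is_dsa_user(struct net_device *netdev)
{
	struct dsa_port *dp = dsa_port_from_netdev(netdev);

	/* ERR_PTR(-ENODEV) is returned for anything that is not a DSA user. */
	if (IS_ERR(dp))
		return false;

	return dsa_port_is_user(dp);
}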
@ -1726,7 +1726,7 @@ bool dsa_mdb_present_in_other_db(struct dsa_switch *ds, int port,
EXPORT_SYMBOL_GPL(dsa_mdb_present_in_other_db); EXPORT_SYMBOL_GPL(dsa_mdb_present_in_other_db);
static const struct dsa_stubs __dsa_stubs = { static const struct dsa_stubs __dsa_stubs = {
.master_hwtstamp_validate = __dsa_master_hwtstamp_validate, .conduit_hwtstamp_validate = __dsa_conduit_hwtstamp_validate,
}; };
static void dsa_register_stubs(void) static void dsa_register_stubs(void)
@ -1748,7 +1748,7 @@ static int __init dsa_init_module(void)
if (!dsa_owq) if (!dsa_owq)
return -ENOMEM; return -ENOMEM;
rc = dsa_slave_register_notifier(); rc = dsa_user_register_notifier();
if (rc) if (rc)
goto register_notifier_fail; goto register_notifier_fail;
@ -1763,7 +1763,7 @@ static int __init dsa_init_module(void)
return 0; return 0;
netlink_register_fail: netlink_register_fail:
dsa_slave_unregister_notifier(); dsa_user_unregister_notifier();
dev_remove_pack(&dsa_pack_type); dev_remove_pack(&dsa_pack_type);
register_notifier_fail: register_notifier_fail:
destroy_workqueue(dsa_owq); destroy_workqueue(dsa_owq);
@ -1778,7 +1778,7 @@ static void __exit dsa_cleanup_module(void)
rtnl_link_unregister(&dsa_link_ops); rtnl_link_unregister(&dsa_link_ops);
dsa_slave_unregister_notifier(); dsa_user_unregister_notifier();
dev_remove_pack(&dsa_pack_type); dev_remove_pack(&dsa_pack_type);
destroy_workqueue(dsa_owq); destroy_workqueue(dsa_owq);
} }


@ -21,16 +21,16 @@ void dsa_lag_map(struct dsa_switch_tree *dst, struct dsa_lag *lag);
void dsa_lag_unmap(struct dsa_switch_tree *dst, struct dsa_lag *lag); void dsa_lag_unmap(struct dsa_switch_tree *dst, struct dsa_lag *lag);
struct dsa_lag *dsa_tree_lag_find(struct dsa_switch_tree *dst, struct dsa_lag *dsa_tree_lag_find(struct dsa_switch_tree *dst,
const struct net_device *lag_dev); const struct net_device *lag_dev);
struct net_device *dsa_tree_find_first_master(struct dsa_switch_tree *dst); struct net_device *dsa_tree_find_first_conduit(struct dsa_switch_tree *dst);
int dsa_tree_change_tag_proto(struct dsa_switch_tree *dst, int dsa_tree_change_tag_proto(struct dsa_switch_tree *dst,
const struct dsa_device_ops *tag_ops, const struct dsa_device_ops *tag_ops,
const struct dsa_device_ops *old_tag_ops); const struct dsa_device_ops *old_tag_ops);
void dsa_tree_master_admin_state_change(struct dsa_switch_tree *dst, void dsa_tree_conduit_admin_state_change(struct dsa_switch_tree *dst,
struct net_device *master, struct net_device *conduit,
bool up);
void dsa_tree_conduit_oper_state_change(struct dsa_switch_tree *dst,
struct net_device *conduit,
bool up); bool up);
void dsa_tree_master_oper_state_change(struct dsa_switch_tree *dst,
struct net_device *master,
bool up);
unsigned int dsa_bridge_num_get(const struct net_device *bridge_dev, int max); unsigned int dsa_bridge_num_get(const struct net_device *bridge_dev, int max);
void dsa_bridge_num_put(const struct net_device *bridge_dev, void dsa_bridge_num_put(const struct net_device *bridge_dev,
unsigned int bridge_num); unsigned int bridge_num);


@ -1,22 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#ifndef __DSA_MASTER_H
#define __DSA_MASTER_H
struct dsa_port;
struct net_device;
struct netdev_lag_upper_info;
struct netlink_ext_ack;
int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp);
void dsa_master_teardown(struct net_device *dev);
int dsa_master_lag_setup(struct net_device *lag_dev, struct dsa_port *cpu_dp,
struct netdev_lag_upper_info *uinfo,
struct netlink_ext_ack *extack);
void dsa_master_lag_teardown(struct net_device *lag_dev,
struct dsa_port *cpu_dp);
int __dsa_master_hwtstamp_validate(struct net_device *dev,
const struct kernel_hwtstamp_config *config,
struct netlink_ext_ack *extack);
#endif


@ -5,7 +5,7 @@
#include <net/rtnetlink.h> #include <net/rtnetlink.h>
#include "netlink.h" #include "netlink.h"
#include "slave.h" #include "user.h"
static const struct nla_policy dsa_policy[IFLA_DSA_MAX + 1] = { static const struct nla_policy dsa_policy[IFLA_DSA_MAX + 1] = {
[IFLA_DSA_MASTER] = { .type = NLA_U32 }, [IFLA_DSA_MASTER] = { .type = NLA_U32 },
@ -22,13 +22,13 @@ static int dsa_changelink(struct net_device *dev, struct nlattr *tb[],
if (data[IFLA_DSA_MASTER]) { if (data[IFLA_DSA_MASTER]) {
u32 ifindex = nla_get_u32(data[IFLA_DSA_MASTER]); u32 ifindex = nla_get_u32(data[IFLA_DSA_MASTER]);
struct net_device *master; struct net_device *conduit;
master = __dev_get_by_index(dev_net(dev), ifindex); conduit = __dev_get_by_index(dev_net(dev), ifindex);
if (!master) if (!conduit)
return -EINVAL; return -EINVAL;
err = dsa_slave_change_master(dev, master, extack); err = dsa_user_change_conduit(dev, conduit, extack);
if (err) if (err)
return err; return err;
} }
@ -44,9 +44,9 @@ static size_t dsa_get_size(const struct net_device *dev)
static int dsa_fill_info(struct sk_buff *skb, const struct net_device *dev) static int dsa_fill_info(struct sk_buff *skb, const struct net_device *dev)
{ {
struct net_device *master = dsa_slave_to_master(dev); struct net_device *conduit = dsa_user_to_conduit(dev);
if (nla_put_u32(skb, IFLA_DSA_MASTER, master->ifindex)) if (nla_put_u32(skb, IFLA_DSA_MASTER, conduit->ifindex))
return -EMSGSIZE; return -EMSGSIZE;
return 0; return 0;


@ -14,9 +14,9 @@
#include "dsa.h" #include "dsa.h"
#include "port.h" #include "port.h"
#include "slave.h"
#include "switch.h" #include "switch.h"
#include "tag_8021q.h" #include "tag_8021q.h"
#include "user.h"
/** /**
* dsa_port_notify - Notify the switching fabric of changes to a port * dsa_port_notify - Notify the switching fabric of changes to a port
@ -289,7 +289,7 @@ static void dsa_port_reset_vlan_filtering(struct dsa_port *dp,
} }
/* If the bridge was vlan_filtering, the bridge core doesn't trigger an /* If the bridge was vlan_filtering, the bridge core doesn't trigger an
* event for changing vlan_filtering setting upon slave ports leaving * event for changing vlan_filtering setting upon user ports leaving
* it. That is a good thing, because that lets us handle it and also * it. That is a good thing, because that lets us handle it and also
* handle the case where the switch's vlan_filtering setting is global * handle the case where the switch's vlan_filtering setting is global
* (not per port). When that happens, the correct moment to trigger the * (not per port). When that happens, the correct moment to trigger the
@ -489,7 +489,7 @@ int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br,
.dp = dp, .dp = dp,
.extack = extack, .extack = extack,
}; };
struct net_device *dev = dp->slave; struct net_device *dev = dp->user;
struct net_device *brport_dev; struct net_device *brport_dev;
int err; int err;
@ -514,8 +514,8 @@ int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br,
dp->bridge->tx_fwd_offload = info.tx_fwd_offload; dp->bridge->tx_fwd_offload = info.tx_fwd_offload;
err = switchdev_bridge_port_offload(brport_dev, dev, dp, err = switchdev_bridge_port_offload(brport_dev, dev, dp,
&dsa_slave_switchdev_notifier, &dsa_user_switchdev_notifier,
&dsa_slave_switchdev_blocking_notifier, &dsa_user_switchdev_blocking_notifier,
dp->bridge->tx_fwd_offload, extack); dp->bridge->tx_fwd_offload, extack);
if (err) if (err)
goto out_rollback_unbridge; goto out_rollback_unbridge;
@ -528,8 +528,8 @@ int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br,
out_rollback_unoffload: out_rollback_unoffload:
switchdev_bridge_port_unoffload(brport_dev, dp, switchdev_bridge_port_unoffload(brport_dev, dp,
&dsa_slave_switchdev_notifier, &dsa_user_switchdev_notifier,
&dsa_slave_switchdev_blocking_notifier); &dsa_user_switchdev_blocking_notifier);
dsa_flush_workqueue(); dsa_flush_workqueue();
out_rollback_unbridge: out_rollback_unbridge:
dsa_broadcast(DSA_NOTIFIER_BRIDGE_LEAVE, &info); dsa_broadcast(DSA_NOTIFIER_BRIDGE_LEAVE, &info);
@ -547,8 +547,8 @@ void dsa_port_pre_bridge_leave(struct dsa_port *dp, struct net_device *br)
return; return;
switchdev_bridge_port_unoffload(brport_dev, dp, switchdev_bridge_port_unoffload(brport_dev, dp,
&dsa_slave_switchdev_notifier, &dsa_user_switchdev_notifier,
&dsa_slave_switchdev_blocking_notifier); &dsa_user_switchdev_blocking_notifier);
dsa_flush_workqueue(); dsa_flush_workqueue();
} }
@ -741,10 +741,10 @@ static bool dsa_port_can_apply_vlan_filtering(struct dsa_port *dp,
*/ */
if (vlan_filtering && dsa_port_is_user(dp)) { if (vlan_filtering && dsa_port_is_user(dp)) {
struct net_device *br = dsa_port_bridge_dev_get(dp); struct net_device *br = dsa_port_bridge_dev_get(dp);
struct net_device *upper_dev, *slave = dp->slave; struct net_device *upper_dev, *user = dp->user;
struct list_head *iter; struct list_head *iter;
netdev_for_each_upper_dev_rcu(slave, upper_dev, iter) { netdev_for_each_upper_dev_rcu(user, upper_dev, iter) {
struct bridge_vlan_info br_info; struct bridge_vlan_info br_info;
u16 vid; u16 vid;
@ -803,9 +803,9 @@ int dsa_port_vlan_filtering(struct dsa_port *dp, bool vlan_filtering,
if (!ds->ops->port_vlan_filtering) if (!ds->ops->port_vlan_filtering)
return -EOPNOTSUPP; return -EOPNOTSUPP;
/* We are called from dsa_slave_switchdev_blocking_event(), /* We are called from dsa_user_switchdev_blocking_event(),
* which is not under rcu_read_lock(), unlike * which is not under rcu_read_lock(), unlike
* dsa_slave_switchdev_event(). * dsa_user_switchdev_event().
*/ */
rcu_read_lock(); rcu_read_lock();
apply = dsa_port_can_apply_vlan_filtering(dp, vlan_filtering, extack); apply = dsa_port_can_apply_vlan_filtering(dp, vlan_filtering, extack);
@ -827,24 +827,24 @@ int dsa_port_vlan_filtering(struct dsa_port *dp, bool vlan_filtering,
ds->vlan_filtering = vlan_filtering; ds->vlan_filtering = vlan_filtering;
dsa_switch_for_each_user_port(other_dp, ds) { dsa_switch_for_each_user_port(other_dp, ds) {
struct net_device *slave = other_dp->slave; struct net_device *user = other_dp->user;
/* We might be called in the unbind path, so not /* We might be called in the unbind path, so not
* all slave devices might still be registered. * all user devices might still be registered.
*/ */
if (!slave) if (!user)
continue; continue;
err = dsa_slave_manage_vlan_filtering(slave, err = dsa_user_manage_vlan_filtering(user,
vlan_filtering); vlan_filtering);
if (err) if (err)
goto restore; goto restore;
} }
} else { } else {
dp->vlan_filtering = vlan_filtering; dp->vlan_filtering = vlan_filtering;
err = dsa_slave_manage_vlan_filtering(dp->slave, err = dsa_user_manage_vlan_filtering(dp->user,
vlan_filtering); vlan_filtering);
if (err) if (err)
goto restore; goto restore;
} }
@ -863,7 +863,7 @@ restore:
} }
/* This enforces legacy behavior for switch drivers which assume they can't /* This enforces legacy behavior for switch drivers which assume they can't
* receive VLAN configuration when enslaved to a bridge with vlan_filtering=0 * receive VLAN configuration when joining a bridge with vlan_filtering=0
*/ */
bool dsa_port_skip_vlan_configuration(struct dsa_port *dp) bool dsa_port_skip_vlan_configuration(struct dsa_port *dp)
{ {
@ -1047,7 +1047,7 @@ int dsa_port_standalone_host_fdb_add(struct dsa_port *dp,
int dsa_port_bridge_host_fdb_add(struct dsa_port *dp, int dsa_port_bridge_host_fdb_add(struct dsa_port *dp,
const unsigned char *addr, u16 vid) const unsigned char *addr, u16 vid)
{ {
struct net_device *master = dsa_port_to_master(dp); struct net_device *conduit = dsa_port_to_conduit(dp);
struct dsa_db db = { struct dsa_db db = {
.type = DSA_DB_BRIDGE, .type = DSA_DB_BRIDGE,
.bridge = *dp->bridge, .bridge = *dp->bridge,
@ -1057,12 +1057,12 @@ int dsa_port_bridge_host_fdb_add(struct dsa_port *dp,
if (!dp->ds->fdb_isolation) if (!dp->ds->fdb_isolation)
db.bridge.num = 0; db.bridge.num = 0;
/* Avoid a call to __dev_set_promiscuity() on the master, which /* Avoid a call to __dev_set_promiscuity() on the conduit, which
* requires rtnl_lock(), since we can't guarantee that is held here, * requires rtnl_lock(), since we can't guarantee that is held here,
* and we can't take it either. * and we can't take it either.
*/ */
if (master->priv_flags & IFF_UNICAST_FLT) { if (conduit->priv_flags & IFF_UNICAST_FLT) {
err = dev_uc_add(master, addr); err = dev_uc_add(conduit, addr);
if (err) if (err)
return err; return err;
} }
@ -1098,7 +1098,7 @@ int dsa_port_standalone_host_fdb_del(struct dsa_port *dp,
int dsa_port_bridge_host_fdb_del(struct dsa_port *dp, int dsa_port_bridge_host_fdb_del(struct dsa_port *dp,
const unsigned char *addr, u16 vid) const unsigned char *addr, u16 vid)
{ {
struct net_device *master = dsa_port_to_master(dp); struct net_device *conduit = dsa_port_to_conduit(dp);
struct dsa_db db = { struct dsa_db db = {
.type = DSA_DB_BRIDGE, .type = DSA_DB_BRIDGE,
.bridge = *dp->bridge, .bridge = *dp->bridge,
@ -1108,8 +1108,8 @@ int dsa_port_bridge_host_fdb_del(struct dsa_port *dp,
if (!dp->ds->fdb_isolation) if (!dp->ds->fdb_isolation)
db.bridge.num = 0; db.bridge.num = 0;
if (master->priv_flags & IFF_UNICAST_FLT) { if (conduit->priv_flags & IFF_UNICAST_FLT) {
err = dev_uc_del(master, addr); err = dev_uc_del(conduit, addr);
if (err) if (err)
return err; return err;
} }
@ -1229,7 +1229,7 @@ int dsa_port_standalone_host_mdb_add(const struct dsa_port *dp,
int dsa_port_bridge_host_mdb_add(const struct dsa_port *dp, int dsa_port_bridge_host_mdb_add(const struct dsa_port *dp,
const struct switchdev_obj_port_mdb *mdb) const struct switchdev_obj_port_mdb *mdb)
{ {
struct net_device *master = dsa_port_to_master(dp); struct net_device *conduit = dsa_port_to_conduit(dp);
struct dsa_db db = { struct dsa_db db = {
.type = DSA_DB_BRIDGE, .type = DSA_DB_BRIDGE,
.bridge = *dp->bridge, .bridge = *dp->bridge,
@ -1239,7 +1239,7 @@ int dsa_port_bridge_host_mdb_add(const struct dsa_port *dp,
if (!dp->ds->fdb_isolation) if (!dp->ds->fdb_isolation)
db.bridge.num = 0; db.bridge.num = 0;
err = dev_mc_add(master, mdb->addr); err = dev_mc_add(conduit, mdb->addr);
if (err) if (err)
return err; return err;
@ -1273,7 +1273,7 @@ int dsa_port_standalone_host_mdb_del(const struct dsa_port *dp,
int dsa_port_bridge_host_mdb_del(const struct dsa_port *dp, int dsa_port_bridge_host_mdb_del(const struct dsa_port *dp,
const struct switchdev_obj_port_mdb *mdb) const struct switchdev_obj_port_mdb *mdb)
{ {
struct net_device *master = dsa_port_to_master(dp); struct net_device *conduit = dsa_port_to_conduit(dp);
struct dsa_db db = { struct dsa_db db = {
.type = DSA_DB_BRIDGE, .type = DSA_DB_BRIDGE,
.bridge = *dp->bridge, .bridge = *dp->bridge,
@ -1283,7 +1283,7 @@ int dsa_port_bridge_host_mdb_del(const struct dsa_port *dp,
if (!dp->ds->fdb_isolation) if (!dp->ds->fdb_isolation)
db.bridge.num = 0; db.bridge.num = 0;
err = dev_mc_del(master, mdb->addr); err = dev_mc_del(conduit, mdb->addr);
if (err) if (err)
return err; return err;
@ -1318,7 +1318,7 @@ int dsa_port_host_vlan_add(struct dsa_port *dp,
const struct switchdev_obj_port_vlan *vlan, const struct switchdev_obj_port_vlan *vlan,
struct netlink_ext_ack *extack) struct netlink_ext_ack *extack)
{ {
struct net_device *master = dsa_port_to_master(dp); struct net_device *conduit = dsa_port_to_conduit(dp);
struct dsa_notifier_vlan_info info = { struct dsa_notifier_vlan_info info = {
.dp = dp, .dp = dp,
.vlan = vlan, .vlan = vlan,
@ -1330,7 +1330,7 @@ int dsa_port_host_vlan_add(struct dsa_port *dp,
if (err && err != -EOPNOTSUPP) if (err && err != -EOPNOTSUPP)
return err; return err;
vlan_vid_add(master, htons(ETH_P_8021Q), vlan->vid); vlan_vid_add(conduit, htons(ETH_P_8021Q), vlan->vid);
return err; return err;
} }
@ -1338,7 +1338,7 @@ int dsa_port_host_vlan_add(struct dsa_port *dp,
int dsa_port_host_vlan_del(struct dsa_port *dp, int dsa_port_host_vlan_del(struct dsa_port *dp,
const struct switchdev_obj_port_vlan *vlan) const struct switchdev_obj_port_vlan *vlan)
{ {
struct net_device *master = dsa_port_to_master(dp); struct net_device *conduit = dsa_port_to_conduit(dp);
struct dsa_notifier_vlan_info info = { struct dsa_notifier_vlan_info info = {
.dp = dp, .dp = dp,
.vlan = vlan, .vlan = vlan,
@ -1349,7 +1349,7 @@ int dsa_port_host_vlan_del(struct dsa_port *dp,
if (err && err != -EOPNOTSUPP) if (err && err != -EOPNOTSUPP)
return err; return err;
vlan_vid_del(master, htons(ETH_P_8021Q), vlan->vid); vlan_vid_del(conduit, htons(ETH_P_8021Q), vlan->vid);
return err; return err;
} }
@ -1398,24 +1398,24 @@ int dsa_port_mrp_del_ring_role(const struct dsa_port *dp,
return ds->ops->port_mrp_del_ring_role(ds, dp->index, mrp); return ds->ops->port_mrp_del_ring_role(ds, dp->index, mrp);
} }
static int dsa_port_assign_master(struct dsa_port *dp, static int dsa_port_assign_conduit(struct dsa_port *dp,
struct net_device *master, struct net_device *conduit,
struct netlink_ext_ack *extack, struct netlink_ext_ack *extack,
bool fail_on_err) bool fail_on_err)
{ {
struct dsa_switch *ds = dp->ds; struct dsa_switch *ds = dp->ds;
int port = dp->index, err; int port = dp->index, err;
err = ds->ops->port_change_master(ds, port, master, extack); err = ds->ops->port_change_conduit(ds, port, conduit, extack);
if (err && !fail_on_err) if (err && !fail_on_err)
dev_err(ds->dev, "port %d failed to assign master %s: %pe\n", dev_err(ds->dev, "port %d failed to assign conduit %s: %pe\n",
port, master->name, ERR_PTR(err)); port, conduit->name, ERR_PTR(err));
if (err && fail_on_err) if (err && fail_on_err)
return err; return err;
dp->cpu_dp = master->dsa_ptr; dp->cpu_dp = conduit->dsa_ptr;
dp->cpu_port_in_lag = netif_is_lag_master(master); dp->cpu_port_in_lag = netif_is_lag_master(conduit);
return 0; return 0;
} }
@ -1428,12 +1428,12 @@ static int dsa_port_assign_master(struct dsa_port *dp,
* the old CPU port before changing it, and restore it on errors during the * the old CPU port before changing it, and restore it on errors during the
* bringup of the new one. * bringup of the new one.
*/ */
int dsa_port_change_master(struct dsa_port *dp, struct net_device *master, int dsa_port_change_conduit(struct dsa_port *dp, struct net_device *conduit,
struct netlink_ext_ack *extack) struct netlink_ext_ack *extack)
{ {
struct net_device *bridge_dev = dsa_port_bridge_dev_get(dp); struct net_device *bridge_dev = dsa_port_bridge_dev_get(dp);
struct net_device *old_master = dsa_port_to_master(dp); struct net_device *old_conduit = dsa_port_to_conduit(dp);
struct net_device *dev = dp->slave; struct net_device *dev = dp->user;
struct dsa_switch *ds = dp->ds; struct dsa_switch *ds = dp->ds;
bool vlan_filtering; bool vlan_filtering;
int err, tmp; int err, tmp;
@ -1454,7 +1454,7 @@ int dsa_port_change_master(struct dsa_port *dp, struct net_device *master,
*/ */
vlan_filtering = dsa_port_is_vlan_filtering(dp); vlan_filtering = dsa_port_is_vlan_filtering(dp);
if (vlan_filtering) { if (vlan_filtering) {
err = dsa_slave_manage_vlan_filtering(dev, false); err = dsa_user_manage_vlan_filtering(dev, false);
if (err) { if (err) {
NL_SET_ERR_MSG_MOD(extack, NL_SET_ERR_MSG_MOD(extack,
"Failed to remove standalone VLANs"); "Failed to remove standalone VLANs");
@ -1465,16 +1465,16 @@ int dsa_port_change_master(struct dsa_port *dp, struct net_device *master,
/* Standalone addresses, and addresses of upper interfaces like /* Standalone addresses, and addresses of upper interfaces like
* VLAN, LAG, HSR need to be migrated. * VLAN, LAG, HSR need to be migrated.
*/ */
dsa_slave_unsync_ha(dev); dsa_user_unsync_ha(dev);
err = dsa_port_assign_master(dp, master, extack, true); err = dsa_port_assign_conduit(dp, conduit, extack, true);
if (err) if (err)
goto rewind_old_addrs; goto rewind_old_addrs;
dsa_slave_sync_ha(dev); dsa_user_sync_ha(dev);
if (vlan_filtering) { if (vlan_filtering) {
err = dsa_slave_manage_vlan_filtering(dev, true); err = dsa_user_manage_vlan_filtering(dev, true);
if (err) { if (err) {
NL_SET_ERR_MSG_MOD(extack, NL_SET_ERR_MSG_MOD(extack,
"Failed to restore standalone VLANs"); "Failed to restore standalone VLANs");
@ -1495,19 +1495,19 @@ int dsa_port_change_master(struct dsa_port *dp, struct net_device *master,
rewind_new_vlan: rewind_new_vlan:
if (vlan_filtering) if (vlan_filtering)
dsa_slave_manage_vlan_filtering(dev, false); dsa_user_manage_vlan_filtering(dev, false);
rewind_new_addrs: rewind_new_addrs:
dsa_slave_unsync_ha(dev); dsa_user_unsync_ha(dev);
dsa_port_assign_master(dp, old_master, NULL, false); dsa_port_assign_conduit(dp, old_conduit, NULL, false);
/* Restore the objects on the old CPU port */ /* Restore the objects on the old CPU port */
rewind_old_addrs: rewind_old_addrs:
dsa_slave_sync_ha(dev); dsa_user_sync_ha(dev);
if (vlan_filtering) { if (vlan_filtering) {
tmp = dsa_slave_manage_vlan_filtering(dev, true); tmp = dsa_user_manage_vlan_filtering(dev, true);
if (tmp) { if (tmp) {
dev_err(ds->dev, dev_err(ds->dev,
"port %d failed to restore standalone VLANs: %pe\n", "port %d failed to restore standalone VLANs: %pe\n",
@ -1620,7 +1620,7 @@ static void dsa_port_phylink_mac_link_down(struct phylink_config *config,
struct dsa_switch *ds = dp->ds; struct dsa_switch *ds = dp->ds;
if (dsa_port_is_user(dp)) if (dsa_port_is_user(dp))
phydev = dp->slave->phydev; phydev = dp->user->phydev;
if (!ds->ops->phylink_mac_link_down) { if (!ds->ops->phylink_mac_link_down) {
if (ds->ops->adjust_link && phydev) if (ds->ops->adjust_link && phydev)
@ -1808,7 +1808,7 @@ err_phy_connect:
* their type. * their type.
* *
* User ports with no phy-handle or fixed-link are expected to connect to an * User ports with no phy-handle or fixed-link are expected to connect to an
* internal PHY located on the ds->slave_mii_bus at an MDIO address equal to * internal PHY located on the ds->user_mii_bus at an MDIO address equal to
* the port number. This description is still actively supported. * the port number. This description is still actively supported.
* *
* Shared (CPU and DSA) ports with no phy-handle or fixed-link are expected to * Shared (CPU and DSA) ports with no phy-handle or fixed-link are expected to
@ -1829,7 +1829,7 @@ err_phy_connect:
* a fixed-link, a phy-handle, or a managed = "in-band-status" property. * a fixed-link, a phy-handle, or a managed = "in-band-status" property.
* It becomes the responsibility of the driver to ensure that these ports * It becomes the responsibility of the driver to ensure that these ports
* operate at the maximum speed (whatever this means) and will interoperate * operate at the maximum speed (whatever this means) and will interoperate
* with the DSA master or other cascade port, since phylink methods will not be * with the DSA conduit or other cascade port, since phylink methods will not be
* invoked for them. * invoked for them.
* *
* If you are considering expanding this table for newly introduced switches, * If you are considering expanding this table for newly introduced switches,

View File

@ -109,7 +109,7 @@ void dsa_port_hsr_leave(struct dsa_port *dp, struct net_device *hsr);
int dsa_port_tag_8021q_vlan_add(struct dsa_port *dp, u16 vid, bool broadcast); int dsa_port_tag_8021q_vlan_add(struct dsa_port *dp, u16 vid, bool broadcast);
void dsa_port_tag_8021q_vlan_del(struct dsa_port *dp, u16 vid, bool broadcast); void dsa_port_tag_8021q_vlan_del(struct dsa_port *dp, u16 vid, bool broadcast);
void dsa_port_set_host_flood(struct dsa_port *dp, bool uc, bool mc); void dsa_port_set_host_flood(struct dsa_port *dp, bool uc, bool mc);
int dsa_port_change_master(struct dsa_port *dp, struct net_device *master, int dsa_port_change_conduit(struct dsa_port *dp, struct net_device *conduit,
struct netlink_ext_ack *extack); struct netlink_ext_ack *extack);
#endif #endif

View File

@ -1,69 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#ifndef __DSA_SLAVE_H
#define __DSA_SLAVE_H
#include <linux/if_bridge.h>
#include <linux/if_vlan.h>
#include <linux/list.h>
#include <linux/netpoll.h>
#include <linux/types.h>
#include <net/dsa.h>
#include <net/gro_cells.h>
struct net_device;
struct netlink_ext_ack;
extern struct notifier_block dsa_slave_switchdev_notifier;
extern struct notifier_block dsa_slave_switchdev_blocking_notifier;
struct dsa_slave_priv {
/* Copy of CPU port xmit for faster access in slave transmit hot path */
struct sk_buff * (*xmit)(struct sk_buff *skb,
struct net_device *dev);
struct gro_cells gcells;
/* DSA port data, such as switch, port index, etc. */
struct dsa_port *dp;
#ifdef CONFIG_NET_POLL_CONTROLLER
struct netpoll *netpoll;
#endif
/* TC context */
struct list_head mall_tc_list;
};
void dsa_slave_mii_bus_init(struct dsa_switch *ds);
int dsa_slave_create(struct dsa_port *dp);
void dsa_slave_destroy(struct net_device *slave_dev);
int dsa_slave_suspend(struct net_device *slave_dev);
int dsa_slave_resume(struct net_device *slave_dev);
int dsa_slave_register_notifier(void);
void dsa_slave_unregister_notifier(void);
void dsa_slave_sync_ha(struct net_device *dev);
void dsa_slave_unsync_ha(struct net_device *dev);
void dsa_slave_setup_tagger(struct net_device *slave);
int dsa_slave_change_mtu(struct net_device *dev, int new_mtu);
int dsa_slave_change_master(struct net_device *dev, struct net_device *master,
struct netlink_ext_ack *extack);
int dsa_slave_manage_vlan_filtering(struct net_device *dev,
bool vlan_filtering);
static inline struct dsa_port *dsa_slave_to_port(const struct net_device *dev)
{
struct dsa_slave_priv *p = netdev_priv(dev);
return p->dp;
}
static inline struct net_device *
dsa_slave_to_master(const struct net_device *dev)
{
struct dsa_port *dp = dsa_slave_to_port(dev);
return dsa_port_to_master(dp);
}
#endif

View File

@ -15,10 +15,10 @@
#include "dsa.h" #include "dsa.h"
#include "netlink.h" #include "netlink.h"
#include "port.h" #include "port.h"
#include "slave.h"
#include "switch.h" #include "switch.h"
#include "tag_8021q.h" #include "tag_8021q.h"
#include "trace.h" #include "trace.h"
#include "user.h"
static unsigned int dsa_switch_fastest_ageing_time(struct dsa_switch *ds, static unsigned int dsa_switch_fastest_ageing_time(struct dsa_switch *ds,
unsigned int ageing_time) unsigned int ageing_time)
@ -894,12 +894,12 @@ static int dsa_switch_change_tag_proto(struct dsa_switch *ds,
* bits that depend on the tagger, such as the MTU. * bits that depend on the tagger, such as the MTU.
*/ */
dsa_switch_for_each_user_port(dp, ds) { dsa_switch_for_each_user_port(dp, ds) {
struct net_device *slave = dp->slave; struct net_device *user = dp->user;
dsa_slave_setup_tagger(slave); dsa_user_setup_tagger(user);
/* rtnl_mutex is held in dsa_tree_change_tag_proto */ /* rtnl_mutex is held in dsa_tree_change_tag_proto */
dsa_slave_change_mtu(slave, slave->mtu); dsa_user_change_mtu(user, user->mtu);
} }
return 0; return 0;
@ -960,13 +960,13 @@ dsa_switch_disconnect_tag_proto(struct dsa_switch *ds,
} }
static int static int
dsa_switch_master_state_change(struct dsa_switch *ds, dsa_switch_conduit_state_change(struct dsa_switch *ds,
struct dsa_notifier_master_state_info *info) struct dsa_notifier_conduit_state_info *info)
{ {
if (!ds->ops->master_state_change) if (!ds->ops->conduit_state_change)
return 0; return 0;
ds->ops->master_state_change(ds, info->master, info->operational); ds->ops->conduit_state_change(ds, info->conduit, info->operational);
return 0; return 0;
} }
@ -1056,8 +1056,8 @@ static int dsa_switch_event(struct notifier_block *nb,
case DSA_NOTIFIER_TAG_8021Q_VLAN_DEL: case DSA_NOTIFIER_TAG_8021Q_VLAN_DEL:
err = dsa_switch_tag_8021q_vlan_del(ds, info); err = dsa_switch_tag_8021q_vlan_del(ds, info);
break; break;
case DSA_NOTIFIER_MASTER_STATE_CHANGE: case DSA_NOTIFIER_CONDUIT_STATE_CHANGE:
err = dsa_switch_master_state_change(ds, info); err = dsa_switch_conduit_state_change(ds, info);
break; break;
default: default:
err = -EOPNOTSUPP; err = -EOPNOTSUPP;

View File

@ -34,7 +34,7 @@ enum {
DSA_NOTIFIER_TAG_PROTO_DISCONNECT, DSA_NOTIFIER_TAG_PROTO_DISCONNECT,
DSA_NOTIFIER_TAG_8021Q_VLAN_ADD, DSA_NOTIFIER_TAG_8021Q_VLAN_ADD,
DSA_NOTIFIER_TAG_8021Q_VLAN_DEL, DSA_NOTIFIER_TAG_8021Q_VLAN_DEL,
DSA_NOTIFIER_MASTER_STATE_CHANGE, DSA_NOTIFIER_CONDUIT_STATE_CHANGE,
}; };
/* DSA_NOTIFIER_AGEING_TIME */ /* DSA_NOTIFIER_AGEING_TIME */
@ -105,9 +105,9 @@ struct dsa_notifier_tag_8021q_vlan_info {
u16 vid; u16 vid;
}; };
/* DSA_NOTIFIER_MASTER_STATE_CHANGE */ /* DSA_NOTIFIER_CONDUIT_STATE_CHANGE */
struct dsa_notifier_master_state_info { struct dsa_notifier_conduit_state_info {
const struct net_device *master; const struct net_device *conduit;
bool operational; bool operational;
}; };
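The renamed notifier carries only two pieces of information: the conduit net_device and whether it is operational. As a rough illustration (not part of this commit, with an invented "foo" driver), a switch driver consumes it through the renamed ->conduit_state_change() op, whose signature can be read off the call in dsa_switch_conduit_state_change() above. Real drivers use it, for example, to stop flooding traffic towards a CPU port whose conduit cannot deliver it to the host.

static void foo_conduit_state_change(struct dsa_switch *ds,
				     const struct net_device *conduit,
				     bool operational)
{
	/* Purely illustrative reaction to DSA_NOTIFIER_CONDUIT_STATE_CHANGE */
	dev_dbg(ds->dev, "conduit %s is %soperational\n",
		netdev_name(conduit), operational ? "" : "not ");
}

static const struct dsa_switch_ops foo_switch_ops = {
	/* ... other ops ... */
	.conduit_state_change	= foo_conduit_state_change,
};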

View File

@ -13,8 +13,8 @@
#include <net/dsa.h> #include <net/dsa.h>
#include <net/dst_metadata.h> #include <net/dst_metadata.h>
#include "slave.h"
#include "tag.h" #include "tag.h"
#include "user.h"
static LIST_HEAD(dsa_tag_drivers_list); static LIST_HEAD(dsa_tag_drivers_list);
static DEFINE_MUTEX(dsa_tag_drivers_lock); static DEFINE_MUTEX(dsa_tag_drivers_lock);
@ -27,7 +27,7 @@ static DEFINE_MUTEX(dsa_tag_drivers_lock);
* switch, the DSA driver owning the interface to which the packet is * switch, the DSA driver owning the interface to which the packet is
* delivered is never notified unless we do so here. * delivered is never notified unless we do so here.
*/ */
static bool dsa_skb_defer_rx_timestamp(struct dsa_slave_priv *p, static bool dsa_skb_defer_rx_timestamp(struct dsa_user_priv *p,
struct sk_buff *skb) struct sk_buff *skb)
{ {
struct dsa_switch *ds = p->dp->ds; struct dsa_switch *ds = p->dp->ds;
@ -57,7 +57,7 @@ static int dsa_switch_rcv(struct sk_buff *skb, struct net_device *dev,
struct metadata_dst *md_dst = skb_metadata_dst(skb); struct metadata_dst *md_dst = skb_metadata_dst(skb);
struct dsa_port *cpu_dp = dev->dsa_ptr; struct dsa_port *cpu_dp = dev->dsa_ptr;
struct sk_buff *nskb = NULL; struct sk_buff *nskb = NULL;
struct dsa_slave_priv *p; struct dsa_user_priv *p;
if (unlikely(!cpu_dp)) { if (unlikely(!cpu_dp)) {
kfree_skb(skb); kfree_skb(skb);
@ -75,7 +75,7 @@ static int dsa_switch_rcv(struct sk_buff *skb, struct net_device *dev,
if (!skb_has_extensions(skb)) if (!skb_has_extensions(skb))
skb->slow_gro = 0; skb->slow_gro = 0;
skb->dev = dsa_master_find_slave(dev, 0, port); skb->dev = dsa_conduit_find_user(dev, 0, port);
if (likely(skb->dev)) { if (likely(skb->dev)) {
dsa_default_offload_fwd_mark(skb); dsa_default_offload_fwd_mark(skb);
nskb = skb; nskb = skb;
@ -94,7 +94,7 @@ static int dsa_switch_rcv(struct sk_buff *skb, struct net_device *dev,
skb->pkt_type = PACKET_HOST; skb->pkt_type = PACKET_HOST;
skb->protocol = eth_type_trans(skb, skb->dev); skb->protocol = eth_type_trans(skb, skb->dev);
if (unlikely(!dsa_slave_dev_check(skb->dev))) { if (unlikely(!dsa_user_dev_check(skb->dev))) {
/* Packet is to be injected directly on an upper /* Packet is to be injected directly on an upper
* device, e.g. a team/bond, so skip all DSA-port * device, e.g. a team/bond, so skip all DSA-port
* specific actions. * specific actions.

View File

@ -9,7 +9,7 @@
#include <net/dsa.h> #include <net/dsa.h>
#include "port.h" #include "port.h"
#include "slave.h" #include "user.h"
struct dsa_tag_driver { struct dsa_tag_driver {
const struct dsa_device_ops *ops; const struct dsa_device_ops *ops;
@ -29,7 +29,7 @@ static inline int dsa_tag_protocol_overhead(const struct dsa_device_ops *ops)
return ops->needed_headroom + ops->needed_tailroom; return ops->needed_headroom + ops->needed_tailroom;
} }
static inline struct net_device *dsa_master_find_slave(struct net_device *dev, static inline struct net_device *dsa_conduit_find_user(struct net_device *dev,
int device, int port) int device, int port)
{ {
struct dsa_port *cpu_dp = dev->dsa_ptr; struct dsa_port *cpu_dp = dev->dsa_ptr;
@ -39,7 +39,7 @@ static inline struct net_device *dsa_master_find_slave(struct net_device *dev,
list_for_each_entry(dp, &dst->ports, list) list_for_each_entry(dp, &dst->ports, list)
if (dp->ds->index == device && dp->index == port && if (dp->ds->index == device && dp->index == port &&
dp->type == DSA_PORT_TYPE_USER) dp->type == DSA_PORT_TYPE_USER)
return dp->slave; return dp->user;
return NULL; return NULL;
} }
@ -49,7 +49,7 @@ static inline struct net_device *dsa_master_find_slave(struct net_device *dev,
*/ */
static inline struct sk_buff *dsa_untag_bridge_pvid(struct sk_buff *skb) static inline struct sk_buff *dsa_untag_bridge_pvid(struct sk_buff *skb)
{ {
struct dsa_port *dp = dsa_slave_to_port(skb->dev); struct dsa_port *dp = dsa_user_to_port(skb->dev);
struct net_device *br = dsa_port_bridge_dev_get(dp); struct net_device *br = dsa_port_bridge_dev_get(dp);
struct net_device *dev = skb->dev; struct net_device *dev = skb->dev;
struct net_device *upper_dev; struct net_device *upper_dev;
@ -107,12 +107,12 @@ static inline struct sk_buff *dsa_untag_bridge_pvid(struct sk_buff *skb)
* to support termination through the bridge. * to support termination through the bridge.
*/ */
static inline struct net_device * static inline struct net_device *
dsa_find_designated_bridge_port_by_vid(struct net_device *master, u16 vid) dsa_find_designated_bridge_port_by_vid(struct net_device *conduit, u16 vid)
{ {
struct dsa_port *cpu_dp = master->dsa_ptr; struct dsa_port *cpu_dp = conduit->dsa_ptr;
struct dsa_switch_tree *dst = cpu_dp->dst; struct dsa_switch_tree *dst = cpu_dp->dst;
struct bridge_vlan_info vinfo; struct bridge_vlan_info vinfo;
struct net_device *slave; struct net_device *user;
struct dsa_port *dp; struct dsa_port *dp;
int err; int err;
@ -134,13 +134,13 @@ dsa_find_designated_bridge_port_by_vid(struct net_device *master, u16 vid)
if (dp->cpu_dp != cpu_dp) if (dp->cpu_dp != cpu_dp)
continue; continue;
slave = dp->slave; user = dp->user;
err = br_vlan_get_info_rcu(slave, vid, &vinfo); err = br_vlan_get_info_rcu(user, vid, &vinfo);
if (err) if (err)
continue; continue;
return slave; return user;
} }
return NULL; return NULL;
@ -155,7 +155,7 @@ dsa_find_designated_bridge_port_by_vid(struct net_device *master, u16 vid)
*/ */
static inline void dsa_default_offload_fwd_mark(struct sk_buff *skb) static inline void dsa_default_offload_fwd_mark(struct sk_buff *skb)
{ {
struct dsa_port *dp = dsa_slave_to_port(skb->dev); struct dsa_port *dp = dsa_user_to_port(skb->dev);
skb->offload_fwd_mark = !!(dp->bridge); skb->offload_fwd_mark = !!(dp->bridge);
} }
@ -215,9 +215,9 @@ static inline void dsa_alloc_etype_header(struct sk_buff *skb, int len)
memmove(skb->data, skb->data + len, 2 * ETH_ALEN); memmove(skb->data, skb->data + len, 2 * ETH_ALEN);
} }
/* On RX, eth_type_trans() on the DSA master pulls ETH_HLEN bytes starting from /* On RX, eth_type_trans() on the DSA conduit pulls ETH_HLEN bytes starting from
* skb_mac_header(skb), which leaves skb->data pointing at the first byte after * skb_mac_header(skb), which leaves skb->data pointing at the first byte after
* what the DSA master perceives as the EtherType (the beginning of the L3 * what the DSA conduit perceives as the EtherType (the beginning of the L3
* protocol). Since DSA EtherType header taggers treat the EtherType as part of * protocol). Since DSA EtherType header taggers treat the EtherType as part of
* the DSA tag itself, and the EtherType is 2 bytes in length, the DSA header * the DSA tag itself, and the EtherType is 2 bytes in length, the DSA header
* is located 2 bytes behind skb->data. Note that EtherType in this context * is located 2 bytes behind skb->data. Note that EtherType in this context
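A minimal sketch of the pointer arithmetic this comment describes; the helper names are illustrative, not necessarily the ones tag.h exports. On RX the EtherType-style DSA header starts two bytes before skb->data, while on TX (where skb->data still points at the destination MAC address, as dsa_alloc_etype_header() above relies on) it sits right after the two MAC addresses.

static inline void *example_etype_header_pos_rx(struct sk_buff *skb)
{
	/* eth_type_trans() on the conduit already pulled ETH_HLEN bytes,
	 * and the 2-byte EtherType is counted as part of the DSA header.
	 */
	return skb->data - 2;
}

static inline void *example_etype_header_pos_tx(struct sk_buff *skb)
{
	/* On TX, skb->data points at the destination MAC address. */
	return skb->data + 2 * ETH_ALEN;
}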

View File

@ -73,7 +73,7 @@ struct dsa_tag_8021q_vlan {
struct dsa_8021q_context { struct dsa_8021q_context {
struct dsa_switch *ds; struct dsa_switch *ds;
struct list_head vlans; struct list_head vlans;
/* EtherType of RX VID, used for filtering on master interface */ /* EtherType of RX VID, used for filtering on conduit interface */
__be16 proto; __be16 proto;
}; };
@ -338,7 +338,7 @@ static int dsa_tag_8021q_port_setup(struct dsa_switch *ds, int port)
struct dsa_8021q_context *ctx = ds->tag_8021q_ctx; struct dsa_8021q_context *ctx = ds->tag_8021q_ctx;
struct dsa_port *dp = dsa_to_port(ds, port); struct dsa_port *dp = dsa_to_port(ds, port);
u16 vid = dsa_tag_8021q_standalone_vid(dp); u16 vid = dsa_tag_8021q_standalone_vid(dp);
struct net_device *master; struct net_device *conduit;
int err; int err;
/* The CPU port is implicitly configured by /* The CPU port is implicitly configured by
@ -347,7 +347,7 @@ static int dsa_tag_8021q_port_setup(struct dsa_switch *ds, int port)
if (!dsa_port_is_user(dp)) if (!dsa_port_is_user(dp))
return 0; return 0;
master = dsa_port_to_master(dp); conduit = dsa_port_to_conduit(dp);
err = dsa_port_tag_8021q_vlan_add(dp, vid, false); err = dsa_port_tag_8021q_vlan_add(dp, vid, false);
if (err) { if (err) {
@ -357,8 +357,8 @@ static int dsa_tag_8021q_port_setup(struct dsa_switch *ds, int port)
return err; return err;
} }
/* Add the VLAN to the master's RX filter. */ /* Add the VLAN to the conduit's RX filter. */
vlan_vid_add(master, ctx->proto, vid); vlan_vid_add(conduit, ctx->proto, vid);
return err; return err;
} }
@ -368,7 +368,7 @@ static void dsa_tag_8021q_port_teardown(struct dsa_switch *ds, int port)
struct dsa_8021q_context *ctx = ds->tag_8021q_ctx; struct dsa_8021q_context *ctx = ds->tag_8021q_ctx;
struct dsa_port *dp = dsa_to_port(ds, port); struct dsa_port *dp = dsa_to_port(ds, port);
u16 vid = dsa_tag_8021q_standalone_vid(dp); u16 vid = dsa_tag_8021q_standalone_vid(dp);
struct net_device *master; struct net_device *conduit;
/* The CPU port is implicitly configured by /* The CPU port is implicitly configured by
* configuring the front-panel ports * configuring the front-panel ports
@ -376,11 +376,11 @@ static void dsa_tag_8021q_port_teardown(struct dsa_switch *ds, int port)
if (!dsa_port_is_user(dp)) if (!dsa_port_is_user(dp))
return; return;
master = dsa_port_to_master(dp); conduit = dsa_port_to_conduit(dp);
dsa_port_tag_8021q_vlan_del(dp, vid, false); dsa_port_tag_8021q_vlan_del(dp, vid, false);
vlan_vid_del(master, ctx->proto, vid); vlan_vid_del(conduit, ctx->proto, vid);
} }
static int dsa_tag_8021q_setup(struct dsa_switch *ds) static int dsa_tag_8021q_setup(struct dsa_switch *ds)
@ -468,10 +468,10 @@ struct sk_buff *dsa_8021q_xmit(struct sk_buff *skb, struct net_device *netdev,
} }
EXPORT_SYMBOL_GPL(dsa_8021q_xmit); EXPORT_SYMBOL_GPL(dsa_8021q_xmit);
struct net_device *dsa_tag_8021q_find_port_by_vbid(struct net_device *master, struct net_device *dsa_tag_8021q_find_port_by_vbid(struct net_device *conduit,
int vbid) int vbid)
{ {
struct dsa_port *cpu_dp = master->dsa_ptr; struct dsa_port *cpu_dp = conduit->dsa_ptr;
struct dsa_switch_tree *dst = cpu_dp->dst; struct dsa_switch_tree *dst = cpu_dp->dst;
struct dsa_port *dp; struct dsa_port *dp;
@ -490,7 +490,7 @@ struct net_device *dsa_tag_8021q_find_port_by_vbid(struct net_device *master,
continue; continue;
if (dsa_port_bridge_num_get(dp) == vbid) if (dsa_port_bridge_num_get(dp) == vbid)
return dp->slave; return dp->user;
} }
return NULL; return NULL;

View File

@ -16,7 +16,7 @@ struct sk_buff *dsa_8021q_xmit(struct sk_buff *skb, struct net_device *netdev,
void dsa_8021q_rcv(struct sk_buff *skb, int *source_port, int *switch_id, void dsa_8021q_rcv(struct sk_buff *skb, int *source_port, int *switch_id,
int *vbid); int *vbid);
struct net_device *dsa_tag_8021q_find_port_by_vbid(struct net_device *master, struct net_device *dsa_tag_8021q_find_port_by_vbid(struct net_device *conduit,
int vbid); int vbid);
int dsa_switch_tag_8021q_vlan_add(struct dsa_switch *ds, int dsa_switch_tag_8021q_vlan_add(struct dsa_switch *ds,

View File

@ -29,7 +29,7 @@
static struct sk_buff *ar9331_tag_xmit(struct sk_buff *skb, static struct sk_buff *ar9331_tag_xmit(struct sk_buff *skb,
struct net_device *dev) struct net_device *dev)
{ {
struct dsa_port *dp = dsa_slave_to_port(dev); struct dsa_port *dp = dsa_user_to_port(dev);
__le16 *phdr; __le16 *phdr;
u16 hdr; u16 hdr;
@ -74,7 +74,7 @@ static struct sk_buff *ar9331_tag_rcv(struct sk_buff *skb,
/* Get source port information */ /* Get source port information */
port = FIELD_GET(AR9331_HDR_PORT_NUM_MASK, hdr); port = FIELD_GET(AR9331_HDR_PORT_NUM_MASK, hdr);
skb->dev = dsa_master_find_slave(ndev, 0, port); skb->dev = dsa_conduit_find_user(ndev, 0, port);
if (!skb->dev) if (!skb->dev)
return NULL; return NULL;

View File

@ -85,7 +85,7 @@ static struct sk_buff *brcm_tag_xmit_ll(struct sk_buff *skb,
struct net_device *dev, struct net_device *dev,
unsigned int offset) unsigned int offset)
{ {
struct dsa_port *dp = dsa_slave_to_port(dev); struct dsa_port *dp = dsa_user_to_port(dev);
u16 queue = skb_get_queue_mapping(skb); u16 queue = skb_get_queue_mapping(skb);
u8 *brcm_tag; u8 *brcm_tag;
@ -96,7 +96,7 @@ static struct sk_buff *brcm_tag_xmit_ll(struct sk_buff *skb,
* (including FCS and tag) because the length verification is done after * (including FCS and tag) because the length verification is done after
* the Broadcom tag is stripped off the ingress packet. * the Broadcom tag is stripped off the ingress packet.
* *
* Let dsa_slave_xmit() free the SKB * Let dsa_user_xmit() free the SKB
*/ */
if (__skb_put_padto(skb, ETH_ZLEN + BRCM_TAG_LEN, false)) if (__skb_put_padto(skb, ETH_ZLEN + BRCM_TAG_LEN, false))
return NULL; return NULL;
@ -119,7 +119,7 @@ static struct sk_buff *brcm_tag_xmit_ll(struct sk_buff *skb,
brcm_tag[2] = BRCM_IG_DSTMAP2_MASK; brcm_tag[2] = BRCM_IG_DSTMAP2_MASK;
brcm_tag[3] = (1 << dp->index) & BRCM_IG_DSTMAP1_MASK; brcm_tag[3] = (1 << dp->index) & BRCM_IG_DSTMAP1_MASK;
/* Now tell the master network device about the desired output queue /* Now tell the conduit network device about the desired output queue
* as well * as well
*/ */
skb_set_queue_mapping(skb, BRCM_TAG_SET_PORT_QUEUE(dp->index, queue)); skb_set_queue_mapping(skb, BRCM_TAG_SET_PORT_QUEUE(dp->index, queue));
@ -164,7 +164,7 @@ static struct sk_buff *brcm_tag_rcv_ll(struct sk_buff *skb,
/* Locate which port this is coming from */ /* Locate which port this is coming from */
source_port = brcm_tag[3] & BRCM_EG_PID_MASK; source_port = brcm_tag[3] & BRCM_EG_PID_MASK;
skb->dev = dsa_master_find_slave(dev, 0, source_port); skb->dev = dsa_conduit_find_user(dev, 0, source_port);
if (!skb->dev) if (!skb->dev)
return NULL; return NULL;
@ -216,7 +216,7 @@ MODULE_ALIAS_DSA_TAG_DRIVER(DSA_TAG_PROTO_BRCM, BRCM_NAME);
static struct sk_buff *brcm_leg_tag_xmit(struct sk_buff *skb, static struct sk_buff *brcm_leg_tag_xmit(struct sk_buff *skb,
struct net_device *dev) struct net_device *dev)
{ {
struct dsa_port *dp = dsa_slave_to_port(dev); struct dsa_port *dp = dsa_user_to_port(dev);
u8 *brcm_tag; u8 *brcm_tag;
/* The Ethernet switch we are interfaced with needs packets to be at /* The Ethernet switch we are interfaced with needs packets to be at
@ -226,7 +226,7 @@ static struct sk_buff *brcm_leg_tag_xmit(struct sk_buff *skb,
* (including FCS and tag) because the length verification is done after * (including FCS and tag) because the length verification is done after
* the Broadcom tag is stripped off the ingress packet. * the Broadcom tag is stripped off the ingress packet.
* *
* Let dsa_slave_xmit() free the SKB * Let dsa_user_xmit() free the SKB
*/ */
if (__skb_put_padto(skb, ETH_ZLEN + BRCM_LEG_TAG_LEN, false)) if (__skb_put_padto(skb, ETH_ZLEN + BRCM_LEG_TAG_LEN, false))
return NULL; return NULL;
@ -264,7 +264,7 @@ static struct sk_buff *brcm_leg_tag_rcv(struct sk_buff *skb,
source_port = brcm_tag[5] & BRCM_LEG_PORT_ID; source_port = brcm_tag[5] & BRCM_LEG_PORT_ID;
skb->dev = dsa_master_find_slave(dev, 0, source_port); skb->dev = dsa_conduit_find_user(dev, 0, source_port);
if (!skb->dev) if (!skb->dev)
return NULL; return NULL;

View File

@ -129,7 +129,7 @@ enum dsa_code {
static struct sk_buff *dsa_xmit_ll(struct sk_buff *skb, struct net_device *dev, static struct sk_buff *dsa_xmit_ll(struct sk_buff *skb, struct net_device *dev,
u8 extra) u8 extra)
{ {
struct dsa_port *dp = dsa_slave_to_port(dev); struct dsa_port *dp = dsa_user_to_port(dev);
struct net_device *br_dev; struct net_device *br_dev;
u8 tag_dev, tag_port; u8 tag_dev, tag_port;
enum dsa_cmd cmd; enum dsa_cmd cmd;
@ -267,14 +267,14 @@ static struct sk_buff *dsa_rcv_ll(struct sk_buff *skb, struct net_device *dev,
lag = dsa_lag_by_id(cpu_dp->dst, source_port + 1); lag = dsa_lag_by_id(cpu_dp->dst, source_port + 1);
skb->dev = lag ? lag->dev : NULL; skb->dev = lag ? lag->dev : NULL;
} else { } else {
skb->dev = dsa_master_find_slave(dev, source_device, skb->dev = dsa_conduit_find_user(dev, source_device,
source_port); source_port);
} }
if (!skb->dev) if (!skb->dev)
return NULL; return NULL;
/* When using LAG offload, skb->dev is not a DSA slave interface, /* When using LAG offload, skb->dev is not a DSA user interface,
* so we cannot call dsa_default_offload_fwd_mark and we need to * so we cannot call dsa_default_offload_fwd_mark and we need to
* special-case it. * special-case it.
*/ */

View File

@ -61,7 +61,7 @@
static struct sk_buff *gswip_tag_xmit(struct sk_buff *skb, static struct sk_buff *gswip_tag_xmit(struct sk_buff *skb,
struct net_device *dev) struct net_device *dev)
{ {
struct dsa_port *dp = dsa_slave_to_port(dev); struct dsa_port *dp = dsa_user_to_port(dev);
u8 *gswip_tag; u8 *gswip_tag;
skb_push(skb, GSWIP_TX_HEADER_LEN); skb_push(skb, GSWIP_TX_HEADER_LEN);
@ -89,7 +89,7 @@ static struct sk_buff *gswip_tag_rcv(struct sk_buff *skb,
/* Get source port information */ /* Get source port information */
port = (gswip_tag[7] & GSWIP_RX_SPPID_MASK) >> GSWIP_RX_SPPID_SHIFT; port = (gswip_tag[7] & GSWIP_RX_SPPID_MASK) >> GSWIP_RX_SPPID_SHIFT;
skb->dev = dsa_master_find_slave(dev, 0, port); skb->dev = dsa_conduit_find_user(dev, 0, port);
if (!skb->dev) if (!skb->dev)
return NULL; return NULL;

View File

@ -20,7 +20,7 @@
static struct sk_buff *hellcreek_xmit(struct sk_buff *skb, static struct sk_buff *hellcreek_xmit(struct sk_buff *skb,
struct net_device *dev) struct net_device *dev)
{ {
struct dsa_port *dp = dsa_slave_to_port(dev); struct dsa_port *dp = dsa_user_to_port(dev);
u8 *tag; u8 *tag;
/* Calculate checksums (if required) before adding the trailer tag to /* Calculate checksums (if required) before adding the trailer tag to
@ -45,7 +45,7 @@ static struct sk_buff *hellcreek_rcv(struct sk_buff *skb,
u8 *tag = skb_tail_pointer(skb) - HELLCREEK_TAG_LEN; u8 *tag = skb_tail_pointer(skb) - HELLCREEK_TAG_LEN;
unsigned int port = tag[0] & 0x03; unsigned int port = tag[0] & 0x03;
skb->dev = dsa_master_find_slave(dev, 0, port); skb->dev = dsa_conduit_find_user(dev, 0, port);
if (!skb->dev) { if (!skb->dev) {
netdev_warn_once(dev, "Failed to get source port: %d\n", port); netdev_warn_once(dev, "Failed to get source port: %d\n", port);
return NULL; return NULL;

View File

@ -87,7 +87,7 @@ static struct sk_buff *ksz_common_rcv(struct sk_buff *skb,
struct net_device *dev, struct net_device *dev,
unsigned int port, unsigned int len) unsigned int port, unsigned int len)
{ {
skb->dev = dsa_master_find_slave(dev, 0, port); skb->dev = dsa_conduit_find_user(dev, 0, port);
if (!skb->dev) if (!skb->dev)
return NULL; return NULL;
@ -119,7 +119,7 @@ static struct sk_buff *ksz_common_rcv(struct sk_buff *skb,
static struct sk_buff *ksz8795_xmit(struct sk_buff *skb, struct net_device *dev) static struct sk_buff *ksz8795_xmit(struct sk_buff *skb, struct net_device *dev)
{ {
struct dsa_port *dp = dsa_slave_to_port(dev); struct dsa_port *dp = dsa_user_to_port(dev);
struct ethhdr *hdr; struct ethhdr *hdr;
u8 *tag; u8 *tag;
@ -256,7 +256,7 @@ static struct sk_buff *ksz_defer_xmit(struct dsa_port *dp, struct sk_buff *skb)
return NULL; return NULL;
kthread_init_work(&xmit_work->work, xmit_work_fn); kthread_init_work(&xmit_work->work, xmit_work_fn);
/* Increase refcount so the kfree_skb in dsa_slave_xmit /* Increase refcount so the kfree_skb in dsa_user_xmit
* won't really free the packet. * won't really free the packet.
*/ */
xmit_work->dp = dp; xmit_work->dp = dp;
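The refcount comment above is the whole trick behind deferred transmission in the ksz, ocelot_8021q and sja1105 taggers: take an extra reference, queue the work, and return NULL so the kfree_skb() in dsa_user_xmit() only drops that extra reference. A condensed, illustrative-only version follows; the names and the worker/work-function parameters are made up and do not match the exact ksz9477 code.

struct example_defer_work {
	struct kthread_work work;
	struct dsa_port *dp;
	struct sk_buff *skb;
};

static struct sk_buff *example_defer_xmit(struct dsa_port *dp,
					  struct sk_buff *skb,
					  struct kthread_worker *worker,
					  kthread_work_func_t fn)
{
	struct example_defer_work *w = kzalloc(sizeof(*w), GFP_ATOMIC);

	if (!w)
		return NULL;		/* dsa_user_xmit() really frees the skb */

	kthread_init_work(&w->work, fn);
	w->dp = dp;
	w->skb = skb_get(skb);		/* extra ref survives the kfree_skb() */
	kthread_queue_work(worker, &w->work);

	/* dsa_user_xmit() will kfree_skb(), dropping only the ref taken above */
	return NULL;
}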
@ -272,7 +272,7 @@ static struct sk_buff *ksz9477_xmit(struct sk_buff *skb,
{ {
u16 queue_mapping = skb_get_queue_mapping(skb); u16 queue_mapping = skb_get_queue_mapping(skb);
u8 prio = netdev_txq_to_tc(dev, queue_mapping); u8 prio = netdev_txq_to_tc(dev, queue_mapping);
struct dsa_port *dp = dsa_slave_to_port(dev); struct dsa_port *dp = dsa_user_to_port(dev);
struct ethhdr *hdr; struct ethhdr *hdr;
__be16 *tag; __be16 *tag;
u16 val; u16 val;
@ -344,7 +344,7 @@ static struct sk_buff *ksz9893_xmit(struct sk_buff *skb,
{ {
u16 queue_mapping = skb_get_queue_mapping(skb); u16 queue_mapping = skb_get_queue_mapping(skb);
u8 prio = netdev_txq_to_tc(dev, queue_mapping); u8 prio = netdev_txq_to_tc(dev, queue_mapping);
struct dsa_port *dp = dsa_slave_to_port(dev); struct dsa_port *dp = dsa_user_to_port(dev);
struct ethhdr *hdr; struct ethhdr *hdr;
u8 *tag; u8 *tag;
@ -410,7 +410,7 @@ static struct sk_buff *lan937x_xmit(struct sk_buff *skb,
{ {
u16 queue_mapping = skb_get_queue_mapping(skb); u16 queue_mapping = skb_get_queue_mapping(skb);
u8 prio = netdev_txq_to_tc(dev, queue_mapping); u8 prio = netdev_txq_to_tc(dev, queue_mapping);
struct dsa_port *dp = dsa_slave_to_port(dev); struct dsa_port *dp = dsa_user_to_port(dev);
const struct ethhdr *hdr = eth_hdr(skb); const struct ethhdr *hdr = eth_hdr(skb);
__be16 *tag; __be16 *tag;
u16 val; u16 val;

View File

@ -56,7 +56,7 @@ static int lan9303_xmit_use_arl(struct dsa_port *dp, u8 *dest_addr)
static struct sk_buff *lan9303_xmit(struct sk_buff *skb, struct net_device *dev) static struct sk_buff *lan9303_xmit(struct sk_buff *skb, struct net_device *dev)
{ {
struct dsa_port *dp = dsa_slave_to_port(dev); struct dsa_port *dp = dsa_user_to_port(dev);
__be16 *lan9303_tag; __be16 *lan9303_tag;
u16 tag; u16 tag;
@ -99,7 +99,7 @@ static struct sk_buff *lan9303_rcv(struct sk_buff *skb, struct net_device *dev)
source_port = lan9303_tag1 & 0x3; source_port = lan9303_tag1 & 0x3;
skb->dev = dsa_master_find_slave(dev, 0, source_port); skb->dev = dsa_conduit_find_user(dev, 0, source_port);
if (!skb->dev) { if (!skb->dev) {
dev_warn_ratelimited(&dev->dev, "Dropping packet due to invalid source port\n"); dev_warn_ratelimited(&dev->dev, "Dropping packet due to invalid source port\n");
return NULL; return NULL;

View File

@ -23,7 +23,7 @@
static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb, static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
struct net_device *dev) struct net_device *dev)
{ {
struct dsa_port *dp = dsa_slave_to_port(dev); struct dsa_port *dp = dsa_user_to_port(dev);
u8 xmit_tpid; u8 xmit_tpid;
u8 *mtk_tag; u8 *mtk_tag;
@ -85,7 +85,7 @@ static struct sk_buff *mtk_tag_rcv(struct sk_buff *skb, struct net_device *dev)
/* Get source port information */ /* Get source port information */
port = (hdr & MTK_HDR_RECV_SOURCE_PORT_MASK); port = (hdr & MTK_HDR_RECV_SOURCE_PORT_MASK);
skb->dev = dsa_master_find_slave(dev, 0, port); skb->dev = dsa_conduit_find_user(dev, 0, port);
if (!skb->dev) if (!skb->dev)
return NULL; return NULL;

View File

@ -12,8 +12,8 @@
#define NONE_NAME "none" #define NONE_NAME "none"
static struct sk_buff *dsa_slave_notag_xmit(struct sk_buff *skb, static struct sk_buff *dsa_user_notag_xmit(struct sk_buff *skb,
struct net_device *dev) struct net_device *dev)
{ {
/* Just return the original SKB */ /* Just return the original SKB */
return skb; return skb;
@ -22,7 +22,7 @@ static struct sk_buff *dsa_slave_notag_xmit(struct sk_buff *skb,
static const struct dsa_device_ops none_ops = { static const struct dsa_device_ops none_ops = {
.name = NONE_NAME, .name = NONE_NAME,
.proto = DSA_TAG_PROTO_NONE, .proto = DSA_TAG_PROTO_NONE,
.xmit = dsa_slave_notag_xmit, .xmit = dsa_user_notag_xmit,
}; };
module_dsa_tag_driver(none_ops); module_dsa_tag_driver(none_ops);

View File

@ -45,7 +45,7 @@ static void ocelot_xmit_get_vlan_info(struct sk_buff *skb, struct dsa_port *dp,
static void ocelot_xmit_common(struct sk_buff *skb, struct net_device *netdev, static void ocelot_xmit_common(struct sk_buff *skb, struct net_device *netdev,
__be32 ifh_prefix, void **ifh) __be32 ifh_prefix, void **ifh)
{ {
struct dsa_port *dp = dsa_slave_to_port(netdev); struct dsa_port *dp = dsa_user_to_port(netdev);
struct dsa_switch *ds = dp->ds; struct dsa_switch *ds = dp->ds;
u64 vlan_tci, tag_type; u64 vlan_tci, tag_type;
void *injection; void *injection;
@ -79,7 +79,7 @@ static void ocelot_xmit_common(struct sk_buff *skb, struct net_device *netdev,
static struct sk_buff *ocelot_xmit(struct sk_buff *skb, static struct sk_buff *ocelot_xmit(struct sk_buff *skb,
struct net_device *netdev) struct net_device *netdev)
{ {
struct dsa_port *dp = dsa_slave_to_port(netdev); struct dsa_port *dp = dsa_user_to_port(netdev);
void *injection; void *injection;
ocelot_xmit_common(skb, netdev, cpu_to_be32(0x8880000a), &injection); ocelot_xmit_common(skb, netdev, cpu_to_be32(0x8880000a), &injection);
@ -91,7 +91,7 @@ static struct sk_buff *ocelot_xmit(struct sk_buff *skb,
static struct sk_buff *seville_xmit(struct sk_buff *skb, static struct sk_buff *seville_xmit(struct sk_buff *skb,
struct net_device *netdev) struct net_device *netdev)
{ {
struct dsa_port *dp = dsa_slave_to_port(netdev); struct dsa_port *dp = dsa_user_to_port(netdev);
void *injection; void *injection;
ocelot_xmit_common(skb, netdev, cpu_to_be32(0x88800005), &injection); ocelot_xmit_common(skb, netdev, cpu_to_be32(0x88800005), &injection);
@ -111,12 +111,12 @@ static struct sk_buff *ocelot_rcv(struct sk_buff *skb,
u16 vlan_tpid; u16 vlan_tpid;
u64 rew_val; u64 rew_val;
/* Revert skb->data by the amount consumed by the DSA master, /* Revert skb->data by the amount consumed by the DSA conduit,
* so it points to the beginning of the frame. * so it points to the beginning of the frame.
*/ */
skb_push(skb, ETH_HLEN); skb_push(skb, ETH_HLEN);
/* We don't care about the short prefix, it is just for easy entrance /* We don't care about the short prefix, it is just for easy entrance
* into the DSA master's RX filter. Discard it now by moving it into * into the DSA conduit's RX filter. Discard it now by moving it into
* the headroom. * the headroom.
*/ */
skb_pull(skb, OCELOT_SHORT_PREFIX_LEN); skb_pull(skb, OCELOT_SHORT_PREFIX_LEN);
@ -141,12 +141,12 @@ static struct sk_buff *ocelot_rcv(struct sk_buff *skb,
ocelot_xfh_get_vlan_tci(extraction, &vlan_tci); ocelot_xfh_get_vlan_tci(extraction, &vlan_tci);
ocelot_xfh_get_rew_val(extraction, &rew_val); ocelot_xfh_get_rew_val(extraction, &rew_val);
skb->dev = dsa_master_find_slave(netdev, 0, src_port); skb->dev = dsa_conduit_find_user(netdev, 0, src_port);
if (!skb->dev) if (!skb->dev)
/* The switch will reflect back some frames sent through /* The switch will reflect back some frames sent through
* sockets opened on the bare DSA master. These will come back * sockets opened on the bare DSA conduit. These will come back
* with src_port equal to the index of the CPU port, for which * with src_port equal to the index of the CPU port, for which
* there is no slave registered. So don't print any error * there is no user registered. So don't print any error
* message here (ignore and drop those frames). * message here (ignore and drop those frames).
*/ */
return NULL; return NULL;
@ -170,7 +170,7 @@ static struct sk_buff *ocelot_rcv(struct sk_buff *skb,
* equal to the pvid of the ingress port and should not be used for * equal to the pvid of the ingress port and should not be used for
* processing. * processing.
*/ */
dp = dsa_slave_to_port(skb->dev); dp = dsa_user_to_port(skb->dev);
vlan_tpid = tag_type ? ETH_P_8021AD : ETH_P_8021Q; vlan_tpid = tag_type ? ETH_P_8021AD : ETH_P_8021Q;
if (dsa_port_is_vlan_filtering(dp) && if (dsa_port_is_vlan_filtering(dp) &&
@ -192,7 +192,7 @@ static const struct dsa_device_ops ocelot_netdev_ops = {
.xmit = ocelot_xmit, .xmit = ocelot_xmit,
.rcv = ocelot_rcv, .rcv = ocelot_rcv,
.needed_headroom = OCELOT_TOTAL_TAG_LEN, .needed_headroom = OCELOT_TOTAL_TAG_LEN,
.promisc_on_master = true, .promisc_on_conduit = true,
}; };
DSA_TAG_DRIVER(ocelot_netdev_ops); DSA_TAG_DRIVER(ocelot_netdev_ops);
@ -204,7 +204,7 @@ static const struct dsa_device_ops seville_netdev_ops = {
.xmit = seville_xmit, .xmit = seville_xmit,
.rcv = ocelot_rcv, .rcv = ocelot_rcv,
.needed_headroom = OCELOT_TOTAL_TAG_LEN, .needed_headroom = OCELOT_TOTAL_TAG_LEN,
.promisc_on_master = true, .promisc_on_conduit = true,
}; };
DSA_TAG_DRIVER(seville_netdev_ops); DSA_TAG_DRIVER(seville_netdev_ops);

View File

@ -37,8 +37,8 @@ static struct sk_buff *ocelot_defer_xmit(struct dsa_port *dp,
return NULL; return NULL;
/* PTP over IP packets need UDP checksumming. We may have inherited /* PTP over IP packets need UDP checksumming. We may have inherited
* NETIF_F_HW_CSUM from the DSA master, but these packets are not sent * NETIF_F_HW_CSUM from the DSA conduit, but these packets are not sent
* through the DSA master, so calculate the checksum here. * through the DSA conduit, so calculate the checksum here.
*/ */
if (skb->ip_summed == CHECKSUM_PARTIAL && skb_checksum_help(skb)) if (skb->ip_summed == CHECKSUM_PARTIAL && skb_checksum_help(skb))
return NULL; return NULL;
@ -49,7 +49,7 @@ static struct sk_buff *ocelot_defer_xmit(struct dsa_port *dp,
/* Calls felix_port_deferred_xmit in felix.c */ /* Calls felix_port_deferred_xmit in felix.c */
kthread_init_work(&xmit_work->work, xmit_work_fn); kthread_init_work(&xmit_work->work, xmit_work_fn);
/* Increase refcount so the kfree_skb in dsa_slave_xmit /* Increase refcount so the kfree_skb in dsa_user_xmit
* won't really free the packet. * won't really free the packet.
*/ */
xmit_work->dp = dp; xmit_work->dp = dp;
@ -63,7 +63,7 @@ static struct sk_buff *ocelot_defer_xmit(struct dsa_port *dp,
static struct sk_buff *ocelot_xmit(struct sk_buff *skb, static struct sk_buff *ocelot_xmit(struct sk_buff *skb,
struct net_device *netdev) struct net_device *netdev)
{ {
struct dsa_port *dp = dsa_slave_to_port(netdev); struct dsa_port *dp = dsa_user_to_port(netdev);
u16 queue_mapping = skb_get_queue_mapping(skb); u16 queue_mapping = skb_get_queue_mapping(skb);
u8 pcp = netdev_txq_to_tc(netdev, queue_mapping); u8 pcp = netdev_txq_to_tc(netdev, queue_mapping);
u16 tx_vid = dsa_tag_8021q_standalone_vid(dp); u16 tx_vid = dsa_tag_8021q_standalone_vid(dp);
@ -83,7 +83,7 @@ static struct sk_buff *ocelot_rcv(struct sk_buff *skb,
dsa_8021q_rcv(skb, &src_port, &switch_id, NULL); dsa_8021q_rcv(skb, &src_port, &switch_id, NULL);
skb->dev = dsa_master_find_slave(netdev, switch_id, src_port); skb->dev = dsa_conduit_find_user(netdev, switch_id, src_port);
if (!skb->dev) if (!skb->dev)
return NULL; return NULL;
@ -130,7 +130,7 @@ static const struct dsa_device_ops ocelot_8021q_netdev_ops = {
.connect = ocelot_connect, .connect = ocelot_connect,
.disconnect = ocelot_disconnect, .disconnect = ocelot_disconnect,
.needed_headroom = VLAN_HLEN, .needed_headroom = VLAN_HLEN,
.promisc_on_master = true, .promisc_on_conduit = true,
}; };
MODULE_LICENSE("GPL v2"); MODULE_LICENSE("GPL v2");

View File

@ -14,7 +14,7 @@
static struct sk_buff *qca_tag_xmit(struct sk_buff *skb, struct net_device *dev) static struct sk_buff *qca_tag_xmit(struct sk_buff *skb, struct net_device *dev)
{ {
struct dsa_port *dp = dsa_slave_to_port(dev); struct dsa_port *dp = dsa_user_to_port(dev);
__be16 *phdr; __be16 *phdr;
u16 hdr; u16 hdr;
@ -78,7 +78,7 @@ static struct sk_buff *qca_tag_rcv(struct sk_buff *skb, struct net_device *dev)
/* Get source port information */ /* Get source port information */
port = FIELD_GET(QCA_HDR_RECV_SOURCE_PORT, hdr); port = FIELD_GET(QCA_HDR_RECV_SOURCE_PORT, hdr);
skb->dev = dsa_master_find_slave(dev, 0, port); skb->dev = dsa_conduit_find_user(dev, 0, port);
if (!skb->dev) if (!skb->dev)
return NULL; return NULL;
@ -116,7 +116,7 @@ static const struct dsa_device_ops qca_netdev_ops = {
.xmit = qca_tag_xmit, .xmit = qca_tag_xmit,
.rcv = qca_tag_rcv, .rcv = qca_tag_rcv,
.needed_headroom = QCA_HDR_LEN, .needed_headroom = QCA_HDR_LEN,
.promisc_on_master = true, .promisc_on_conduit = true,
}; };
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");

View File

@ -36,7 +36,7 @@
static struct sk_buff *rtl4a_tag_xmit(struct sk_buff *skb, static struct sk_buff *rtl4a_tag_xmit(struct sk_buff *skb,
struct net_device *dev) struct net_device *dev)
{ {
struct dsa_port *dp = dsa_slave_to_port(dev); struct dsa_port *dp = dsa_user_to_port(dev);
__be16 *p; __be16 *p;
u8 *tag; u8 *tag;
u16 out; u16 out;
@ -97,9 +97,9 @@ static struct sk_buff *rtl4a_tag_rcv(struct sk_buff *skb,
} }
port = protport & 0xff; port = protport & 0xff;
skb->dev = dsa_master_find_slave(dev, 0, port); skb->dev = dsa_conduit_find_user(dev, 0, port);
if (!skb->dev) { if (!skb->dev) {
netdev_dbg(dev, "could not find slave for port %d\n", port); netdev_dbg(dev, "could not find user for port %d\n", port);
return NULL; return NULL;
} }

View File

@ -103,7 +103,7 @@
static void rtl8_4_write_tag(struct sk_buff *skb, struct net_device *dev, static void rtl8_4_write_tag(struct sk_buff *skb, struct net_device *dev,
void *tag) void *tag)
{ {
struct dsa_port *dp = dsa_slave_to_port(dev); struct dsa_port *dp = dsa_user_to_port(dev);
__be16 tag16[RTL8_4_TAG_LEN / 2]; __be16 tag16[RTL8_4_TAG_LEN / 2];
/* Set Realtek EtherType */ /* Set Realtek EtherType */
@ -180,10 +180,10 @@ static int rtl8_4_read_tag(struct sk_buff *skb, struct net_device *dev,
/* Parse TX (switch->CPU) */ /* Parse TX (switch->CPU) */
port = FIELD_GET(RTL8_4_TX, ntohs(tag16[3])); port = FIELD_GET(RTL8_4_TX, ntohs(tag16[3]));
skb->dev = dsa_master_find_slave(dev, 0, port); skb->dev = dsa_conduit_find_user(dev, 0, port);
if (!skb->dev) { if (!skb->dev) {
dev_warn_ratelimited(&dev->dev, dev_warn_ratelimited(&dev->dev,
"could not find slave for port %d\n", "could not find user for port %d\n",
port); port);
return -ENOENT; return -ENOENT;
} }

View File

@ -39,7 +39,7 @@ struct a5psw_tag {
static struct sk_buff *a5psw_tag_xmit(struct sk_buff *skb, struct net_device *dev) static struct sk_buff *a5psw_tag_xmit(struct sk_buff *skb, struct net_device *dev)
{ {
struct dsa_port *dp = dsa_slave_to_port(dev); struct dsa_port *dp = dsa_user_to_port(dev);
struct a5psw_tag *ptag; struct a5psw_tag *ptag;
u32 data2_val; u32 data2_val;
@ -90,7 +90,7 @@ static struct sk_buff *a5psw_tag_rcv(struct sk_buff *skb,
port = FIELD_GET(A5PSW_CTRL_DATA_PORT, ntohs(tag->ctrl_data)); port = FIELD_GET(A5PSW_CTRL_DATA_PORT, ntohs(tag->ctrl_data));
skb->dev = dsa_master_find_slave(dev, 0, port); skb->dev = dsa_conduit_find_user(dev, 0, port);
if (!skb->dev) if (!skb->dev)
return NULL; return NULL;

View File

@ -157,7 +157,7 @@ static struct sk_buff *sja1105_defer_xmit(struct dsa_port *dp,
return NULL; return NULL;
kthread_init_work(&xmit_work->work, xmit_work_fn); kthread_init_work(&xmit_work->work, xmit_work_fn);
/* Increase refcount so the kfree_skb in dsa_slave_xmit /* Increase refcount so the kfree_skb in dsa_user_xmit
* won't really free the packet. * won't really free the packet.
*/ */
xmit_work->dp = dp; xmit_work->dp = dp;
@ -210,7 +210,7 @@ static u16 sja1105_xmit_tpid(struct dsa_port *dp)
static struct sk_buff *sja1105_imprecise_xmit(struct sk_buff *skb, static struct sk_buff *sja1105_imprecise_xmit(struct sk_buff *skb,
struct net_device *netdev) struct net_device *netdev)
{ {
struct dsa_port *dp = dsa_slave_to_port(netdev); struct dsa_port *dp = dsa_user_to_port(netdev);
unsigned int bridge_num = dsa_port_bridge_num_get(dp); unsigned int bridge_num = dsa_port_bridge_num_get(dp);
struct net_device *br = dsa_port_bridge_dev_get(dp); struct net_device *br = dsa_port_bridge_dev_get(dp);
u16 tx_vid; u16 tx_vid;
@ -235,7 +235,7 @@ static struct sk_buff *sja1105_imprecise_xmit(struct sk_buff *skb,
/* Transform untagged control packets into pvid-tagged control packets so that /* Transform untagged control packets into pvid-tagged control packets so that
* all packets sent by this tagger are VLAN-tagged and we can configure the * all packets sent by this tagger are VLAN-tagged and we can configure the
* switch to drop untagged packets coming from the DSA master. * switch to drop untagged packets coming from the DSA conduit.
*/ */
static struct sk_buff *sja1105_pvid_tag_control_pkt(struct dsa_port *dp, static struct sk_buff *sja1105_pvid_tag_control_pkt(struct dsa_port *dp,
struct sk_buff *skb, u8 pcp) struct sk_buff *skb, u8 pcp)
@ -266,7 +266,7 @@ static struct sk_buff *sja1105_pvid_tag_control_pkt(struct dsa_port *dp,
static struct sk_buff *sja1105_xmit(struct sk_buff *skb, static struct sk_buff *sja1105_xmit(struct sk_buff *skb,
struct net_device *netdev) struct net_device *netdev)
{ {
struct dsa_port *dp = dsa_slave_to_port(netdev); struct dsa_port *dp = dsa_user_to_port(netdev);
u16 queue_mapping = skb_get_queue_mapping(skb); u16 queue_mapping = skb_get_queue_mapping(skb);
u8 pcp = netdev_txq_to_tc(netdev, queue_mapping); u8 pcp = netdev_txq_to_tc(netdev, queue_mapping);
u16 tx_vid = dsa_tag_8021q_standalone_vid(dp); u16 tx_vid = dsa_tag_8021q_standalone_vid(dp);
@ -294,7 +294,7 @@ static struct sk_buff *sja1110_xmit(struct sk_buff *skb,
struct net_device *netdev) struct net_device *netdev)
{ {
struct sk_buff *clone = SJA1105_SKB_CB(skb)->clone; struct sk_buff *clone = SJA1105_SKB_CB(skb)->clone;
struct dsa_port *dp = dsa_slave_to_port(netdev); struct dsa_port *dp = dsa_user_to_port(netdev);
u16 queue_mapping = skb_get_queue_mapping(skb); u16 queue_mapping = skb_get_queue_mapping(skb);
u8 pcp = netdev_txq_to_tc(netdev, queue_mapping); u8 pcp = netdev_txq_to_tc(netdev, queue_mapping);
u16 tx_vid = dsa_tag_8021q_standalone_vid(dp); u16 tx_vid = dsa_tag_8021q_standalone_vid(dp);
@ -383,7 +383,7 @@ static struct sk_buff
* Buffer it until we get its meta frame. * Buffer it until we get its meta frame.
*/ */
if (is_link_local) { if (is_link_local) {
struct dsa_port *dp = dsa_slave_to_port(skb->dev); struct dsa_port *dp = dsa_user_to_port(skb->dev);
struct sja1105_tagger_private *priv; struct sja1105_tagger_private *priv;
struct dsa_switch *ds = dp->ds; struct dsa_switch *ds = dp->ds;
@ -396,7 +396,7 @@ static struct sk_buff
if (priv->stampable_skb) { if (priv->stampable_skb) {
dev_err_ratelimited(ds->dev, dev_err_ratelimited(ds->dev,
"Expected meta frame, is %12llx " "Expected meta frame, is %12llx "
"in the DSA master multicast filter?\n", "in the DSA conduit multicast filter?\n",
SJA1105_META_DMAC); SJA1105_META_DMAC);
kfree_skb(priv->stampable_skb); kfree_skb(priv->stampable_skb);
} }
@ -417,7 +417,7 @@ static struct sk_buff
* frame, which serves no further purpose). * frame, which serves no further purpose).
*/ */
} else if (is_meta) { } else if (is_meta) {
struct dsa_port *dp = dsa_slave_to_port(skb->dev); struct dsa_port *dp = dsa_user_to_port(skb->dev);
struct sja1105_tagger_private *priv; struct sja1105_tagger_private *priv;
struct dsa_switch *ds = dp->ds; struct dsa_switch *ds = dp->ds;
struct sk_buff *stampable_skb; struct sk_buff *stampable_skb;
@ -550,7 +550,7 @@ static struct sk_buff *sja1105_rcv(struct sk_buff *skb,
} }
if (source_port != -1 && switch_id != -1) if (source_port != -1 && switch_id != -1)
skb->dev = dsa_master_find_slave(netdev, switch_id, source_port); skb->dev = dsa_conduit_find_user(netdev, switch_id, source_port);
else if (vbid >= 1) else if (vbid >= 1)
skb->dev = dsa_tag_8021q_find_port_by_vbid(netdev, vbid); skb->dev = dsa_tag_8021q_find_port_by_vbid(netdev, vbid);
else else
@ -573,16 +573,16 @@ static struct sk_buff *sja1110_rcv_meta(struct sk_buff *skb, u16 rx_header)
int switch_id = SJA1110_RX_HEADER_SWITCH_ID(rx_header); int switch_id = SJA1110_RX_HEADER_SWITCH_ID(rx_header);
int n_ts = SJA1110_RX_HEADER_N_TS(rx_header); int n_ts = SJA1110_RX_HEADER_N_TS(rx_header);
struct sja1105_tagger_data *tagger_data; struct sja1105_tagger_data *tagger_data;
struct net_device *master = skb->dev; struct net_device *conduit = skb->dev;
struct dsa_port *cpu_dp; struct dsa_port *cpu_dp;
struct dsa_switch *ds; struct dsa_switch *ds;
int i; int i;
cpu_dp = master->dsa_ptr; cpu_dp = conduit->dsa_ptr;
ds = dsa_switch_find(cpu_dp->dst->index, switch_id); ds = dsa_switch_find(cpu_dp->dst->index, switch_id);
if (!ds) { if (!ds) {
net_err_ratelimited("%s: cannot find switch id %d\n", net_err_ratelimited("%s: cannot find switch id %d\n",
master->name, switch_id); conduit->name, switch_id);
return NULL; return NULL;
} }
@ -649,7 +649,7 @@ static struct sk_buff *sja1110_rcv_inband_control_extension(struct sk_buff *skb,
/* skb->len counts from skb->data, while start_of_padding /* skb->len counts from skb->data, while start_of_padding
* counts from the destination MAC address. Right now skb->data * counts from the destination MAC address. Right now skb->data
* is still as set by the DSA master, so to trim away the * is still as set by the DSA conduit, so to trim away the
* padding and trailer we need to account for the fact that * padding and trailer we need to account for the fact that
* skb->data points to skb_mac_header(skb) + ETH_HLEN. * skb->data points to skb_mac_header(skb) + ETH_HLEN.
*/ */
@ -698,7 +698,7 @@ static struct sk_buff *sja1110_rcv(struct sk_buff *skb,
else if (source_port == -1 || switch_id == -1) else if (source_port == -1 || switch_id == -1)
skb->dev = dsa_find_designated_bridge_port_by_vid(netdev, vid); skb->dev = dsa_find_designated_bridge_port_by_vid(netdev, vid);
else else
skb->dev = dsa_master_find_slave(netdev, switch_id, source_port); skb->dev = dsa_conduit_find_user(netdev, switch_id, source_port);
if (!skb->dev) { if (!skb->dev) {
netdev_warn(netdev, "Couldn't decode source port\n"); netdev_warn(netdev, "Couldn't decode source port\n");
return NULL; return NULL;
@ -778,7 +778,7 @@ static const struct dsa_device_ops sja1105_netdev_ops = {
.disconnect = sja1105_disconnect, .disconnect = sja1105_disconnect,
.needed_headroom = VLAN_HLEN, .needed_headroom = VLAN_HLEN,
.flow_dissect = sja1105_flow_dissect, .flow_dissect = sja1105_flow_dissect,
.promisc_on_master = true, .promisc_on_conduit = true,
}; };
DSA_TAG_DRIVER(sja1105_netdev_ops); DSA_TAG_DRIVER(sja1105_netdev_ops);

View File

@ -14,7 +14,7 @@
static struct sk_buff *trailer_xmit(struct sk_buff *skb, struct net_device *dev) static struct sk_buff *trailer_xmit(struct sk_buff *skb, struct net_device *dev)
{ {
struct dsa_port *dp = dsa_slave_to_port(dev); struct dsa_port *dp = dsa_user_to_port(dev);
u8 *trailer; u8 *trailer;
trailer = skb_put(skb, 4); trailer = skb_put(skb, 4);
@ -41,7 +41,7 @@ static struct sk_buff *trailer_rcv(struct sk_buff *skb, struct net_device *dev)
source_port = trailer[1] & 7; source_port = trailer[1] & 7;
skb->dev = dsa_master_find_slave(dev, 0, source_port); skb->dev = dsa_conduit_find_user(dev, 0, source_port);
if (!skb->dev) if (!skb->dev)
return NULL; return NULL;

View File

@ -13,7 +13,7 @@
static struct sk_buff *xrs700x_xmit(struct sk_buff *skb, struct net_device *dev) static struct sk_buff *xrs700x_xmit(struct sk_buff *skb, struct net_device *dev)
{ {
struct dsa_port *partner, *dp = dsa_slave_to_port(dev); struct dsa_port *partner, *dp = dsa_user_to_port(dev);
u8 *trailer; u8 *trailer;
trailer = skb_put(skb, 1); trailer = skb_put(skb, 1);
@ -39,7 +39,7 @@ static struct sk_buff *xrs700x_rcv(struct sk_buff *skb, struct net_device *dev)
if (source_port < 0) if (source_port < 0)
return NULL; return NULL;
skb->dev = dsa_master_find_slave(dev, 0, source_port); skb->dev = dsa_conduit_find_user(dev, 0, source_port);
if (!skb->dev) if (!skb->dev)
return NULL; return NULL;

File diff suppressed because it is too large

net/dsa/user.h (new file, 69 lines)
View File

@ -0,0 +1,69 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#ifndef __DSA_USER_H
#define __DSA_USER_H
#include <linux/if_bridge.h>
#include <linux/if_vlan.h>
#include <linux/list.h>
#include <linux/netpoll.h>
#include <linux/types.h>
#include <net/dsa.h>
#include <net/gro_cells.h>
struct net_device;
struct netlink_ext_ack;
extern struct notifier_block dsa_user_switchdev_notifier;
extern struct notifier_block dsa_user_switchdev_blocking_notifier;
struct dsa_user_priv {
/* Copy of CPU port xmit for faster access in user transmit hot path */
struct sk_buff * (*xmit)(struct sk_buff *skb,
struct net_device *dev);
struct gro_cells gcells;
/* DSA port data, such as switch, port index, etc. */
struct dsa_port *dp;
#ifdef CONFIG_NET_POLL_CONTROLLER
struct netpoll *netpoll;
#endif
/* TC context */
struct list_head mall_tc_list;
};
void dsa_user_mii_bus_init(struct dsa_switch *ds);
int dsa_user_create(struct dsa_port *dp);
void dsa_user_destroy(struct net_device *user_dev);
int dsa_user_suspend(struct net_device *user_dev);
int dsa_user_resume(struct net_device *user_dev);
int dsa_user_register_notifier(void);
void dsa_user_unregister_notifier(void);
void dsa_user_sync_ha(struct net_device *dev);
void dsa_user_unsync_ha(struct net_device *dev);
void dsa_user_setup_tagger(struct net_device *user);
int dsa_user_change_mtu(struct net_device *dev, int new_mtu);
int dsa_user_change_conduit(struct net_device *dev, struct net_device *conduit,
struct netlink_ext_ack *extack);
int dsa_user_manage_vlan_filtering(struct net_device *dev,
bool vlan_filtering);
static inline struct dsa_port *dsa_user_to_port(const struct net_device *dev)
{
struct dsa_user_priv *p = netdev_priv(dev);
return p->dp;
}
static inline struct net_device *
dsa_user_to_conduit(const struct net_device *dev)
{
struct dsa_port *dp = dsa_user_to_port(dev);
return dsa_port_to_conduit(dp);
}
#endif
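For reference only, a toy tagger transmit function shows how the helpers declared above are meant to be used after the rename: dsa_user_to_port() replaces dsa_slave_to_port() on the user netdev, and the matching RX lookup goes through dsa_conduit_find_user() from tag.h, as in the tag_*.c hunks earlier. The 4-byte trailer format is invented, and a real tagger would also pad short frames first, as the brcm/ksz comments above note.

static struct sk_buff *example_tag_xmit(struct sk_buff *skb,
					struct net_device *dev)
{
	struct dsa_port *dp = dsa_user_to_port(dev);	/* was dsa_slave_to_port() */
	u8 *trailer;

	trailer = skb_put(skb, 4);	/* invented 4-byte trailer */
	trailer[0] = 0x80;		/* invented "directed frame" flag */
	trailer[1] = 1 << dp->index;	/* egress port mask */
	trailer[2] = 0;
	trailer[3] = 0;

	return skb;
}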