Subject: Re: [PATCH v3 1/2] net: core: Notify on changes to dev->promiscuity
On Sat, Aug 31, 2019 at 09:35:56PM +0200, Andrew Lunn wrote:
> > Also, what happens when I'm running these application without putting
> > the interface in promisc mode? On an offloaded interface I would not be
> > able to even capture packets addressed to my interface's MAC address.
>
> Sorry for rejoining the discussion late. I've been travelling and I'm
> now 3/4 of the way to Lisbon.

Hi Andrew,

Have fun!

> That statement I don't get.

What about the other statements?

> If the frame has the MAC address of the interface, it has to be
> delivered to the CPU.

So every packet that needs to be routed should be delivered to the CPU?
Definitely not.

> And so pcap will see it when running on the interface. I can pretty
> much guarantee every DSA driver does that.

I assume that is because you are currently only considering L2 forwarding.

> But to address the bigger picture. My understanding is that we want to
> model offloading as a mechanism to accelerate what Linux can already
> do. The user should not have to care about these accelerators. The
> interface should work like a normal Linux interface. I can put an IP
> address on it and ping a peer. I can run a dhcp client and get an IP
> address from a dhcp server. I can add the interface to a bridge, and
> packets will get bridged. I as a user should not need to care if this
> is done in software, or accelerated by offloading it. I can add a
> route, and if the accelerator knows about L3, it can accelerate that as
> well. If not, the kernel will route it.

Yep, and this is how it's all working today.

> So if I run wireshark on an interface, I expect the interface to be
> put into promisc mode and that I will see all packets ingressing the
> interface. What the accelerator needs to do to achieve this, I as a
> user don't care.
>
> I can follow the argument that I won't necessarily see every
> packet. But that is always true. For many embedded systems, the CPU is
> too slow to receive at line rate, even when we are talking about 1G
> links. Packets do get dropped. And I hope tcpdump users understand
> that.
>
> For me, having tcpdump use tc trap is just wrong. It breaks the model
> that the user should not care about the accelerator. If anything, I
> think the driver needs to translate the cBPF program that pcap passes
> to the kernel into whatever internal format the accelerator can
> process. That is just another example of using hardware acceleration.
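
Concretely, since pcap attaches a classic BPF program via
setsockopt(SO_ATTACH_FILTER), what you are describing would be a driver
hook along these lines. This is only a rough sketch: the hook and the
foo_hw_*() helpers are hypothetical, nothing like this exists upstream.

#include <linux/filter.h>
#include <linux/netdevice.h>

/* Hypothetical driver helpers, invented for the sketch. */
static bool foo_hw_can_match(struct net_device *dev,
                             const struct sock_filter *insn);
static int foo_hw_program_filter(struct net_device *dev,
                                 const struct sock_fprog_kern *fprog);

/* Walk the classic BPF program and program equivalent matchers into
 * the accelerator; bail out so the stack falls back to software
 * filtering when the hardware cannot express an instruction.
 */
static int foo_offload_cbpf(struct net_device *dev,
                            const struct sock_fprog_kern *fprog)
{
        int i;

        for (i = 0; i < fprog->len; i++) {
                const struct sock_filter *insn = &fprog->filter[i];

                if (!foo_hw_can_match(dev, insn))
                        return -EOPNOTSUPP;
        }

        return foo_hw_program_filter(dev, fprog);
}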

Look, this again boils down to what promisc mode means with regard to
hardware offload. You want it to mean "punt all traffic to the CPU"?
Fine. It does not seem like anyone will be switching sides anyway, so
let's move forward. But the current approach is not good. Each driver
needs to carry this special-case logic, and the semantics of promisc
mode change not only with the value of the promisc counter, but also
with the interface's uppers. This is highly fragile and confusing.
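
To illustrate, this is the kind of logic each driver ends up carrying.
A made-up example, not lifted from any real driver; the foo_port_*()
helpers are invented:

#include <linux/netdevice.h>

/* Hypothetical hardware helpers, for the sake of the example. */
static void foo_port_trap_all(struct net_device *dev, bool enable);
static void foo_port_mirror_to_cpu(struct net_device *dev, bool enable);

static void foo_set_rx_mode(struct net_device *dev)
{
        bool promisc = dev->flags & IFF_PROMISC;

        if (!netif_is_bridge_port(dev)) {
                /* Standalone port: promisc means "trap everything". */
                foo_port_trap_all(dev, promisc);
        } else {
                /* Bridged port: frames are forwarded in hardware, so
                 * the same flag has to mean something weaker here.
                 */
                foo_port_mirror_to_cpu(dev, promisc);
        }
}

The same user-visible flag means two different things depending on the
port's upper, which is exactly the fragility I'm pointing at.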

The approach taken in v2 makes much more sense: add a new flag for
accelerators and have the networking stack check it before putting the
interface in promisc mode. Then the only thing drivers need to do is
instruct the accelerator to trap all traffic to the CPU.
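
Roughly like this. It is a sketch of the direction only, not the v2
patch itself; IFF_PROMISC_BY_TRAP and ndo_set_trap_all are made-up
names and do not exist upstream:

#include <linux/netdevice.h>

/* Made-up capability flag an offloading driver would set; assumes a
 * matching new ndo_set_trap_all() hook added to struct net_device_ops.
 */
#define IFF_PROMISC_BY_TRAP     BIT(30)

static int dev_promisc_update(struct net_device *dev, bool enable)
{
        const struct net_device_ops *ops = dev->netdev_ops;

        if ((dev->priv_flags & IFF_PROMISC_BY_TRAP) &&
            ops->ndo_set_trap_all)
                /* Offloaded datapath: emulate promiscuity by asking
                 * the accelerator to trap all traffic to the CPU.
                 */
                return ops->ndo_set_trap_all(dev, enable);

        /* Plain NIC: program the hardware RX filter as usual. */
        if (enable)
                dev->flags |= IFF_PROMISC;
        else
                dev->flags &= ~IFF_PROMISC;
        if (ops->ndo_set_rx_mode)
                ops->ndo_set_rx_mode(dev);

        return 0;
}

That keeps one well-defined check in the core instead of N
upper-dependent variations spread across the drivers.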
