On Wed, 16 Dec 2009 14:30:36 -0800
David Daney <firstname.lastname@example.org> wrote:
> Chetan Loke wrote:
> >>> Does your hardware do flow-based queues? In this model you have
> >>> multiple rx queues and the hardware hashes incoming packets to a single
> >>> queue based on the addresses, ports, etc. This ensures that all the
> >>> packets of a single connection always get processed in the order they
> >>> arrived at the net device.
> >> Indeed, this is exactly what we have.
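As a concrete illustration of that model (a sketch of my own, not from
any driver; flow_to_queue() and its arguments are invented): hash the
flow 4-tuple so that every packet of a connection maps to the same rx
queue, which is what preserves per-flow ordering.

#include <linux/jhash.h>

/* Hypothetical software analogue of the hardware flow hash: all
 * packets of one TCP/UDP flow hash to the same queue index, so a
 * single connection is never reordered across queues. */
static u32 flow_to_queue(u32 saddr, u32 daddr,
                         u16 sport, u16 dport, u32 nqueues)
{
        u32 h = jhash_3words(saddr, daddr,
                             ((u32)sport << 16) | dport, 0);

        return h % nqueues;
}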
> >>> Typically in this model you have as many interrupts as queues
> >>> (presumably 16 in your case). Each queue is assigned an interrupt and
> >>> that interrupt is affined to a single core.
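For reference, the usual shape of that wiring in a driver (again just
a sketch; the mydev_* names and the priv layout are invented, and in
practice the binding is often done from userspace through
/proc/irq/<n>/smp_affinity rather than in the driver itself):

#include <linux/interrupt.h>
#include <linux/cpumask.h>

/* Hypothetical setup: one MSI-X vector per rx queue, with queue i's
 * interrupt affined to CPU i so each queue is drained by one core. */
static int mydev_setup_rx_irqs(struct mydev_priv *priv, int nqueues)
{
        int i, err;

        for (i = 0; i < nqueues; i++) {
                err = request_irq(priv->msix_entries[i].vector,
                                  mydev_rx_isr, 0, "mydev-rx",
                                  &priv->rxq[i]);
                if (err)
                        return err;
                /* Pin the vector to a single core. */
                irq_set_affinity(priv->msix_entries[i].vector,
                                 cpumask_of(i));
        }
        return 0;
}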
> >> Certainly this is one mode of operation that should be supported, but I
> >> would also like to be able to go for raw throughput and have as many cores
> >> as possible reading from a single queue (like I currently have).
> > Well, you could let the NIC firmware (f/w) handle this. The f/w
> > would know which interrupt was injected most recently; in other
> > words, it would have a history of which CPUs should be available.
> > So if a previously interrupted CPU isn't making good progress, the
> > firmware should route the incoming response packets to a different
> > queue. That way some other CPU will pick them up.
> It isn't a NIC. There is no firmware. The system interrupt hardware
> is what it is and cannot be changed.
> My current implementation still has a single input queue configured and
> I get a maskable interrupt on a single CPU when packets are available.
> If the queue depth increases above a given threshold, I optionally send
> an IPI to another CPU to enable NAPI polling on that CPU.
> Currently I have a module parameter that controls the maximum number of
> CPUs that will have NAPI polling enabled.
> This allows me to get multiple CPUs doing receive processing without
> having to hack into the lower levels of the system's interrupt
> processing code to try to do interrupt steering. Since all the
> interrupt service routine was doing was calling netif_rx_schedule(),
> I can simply do this via smp_call_function_single().
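For the archives, roughly what that fan-out looks like (my sketch
based on the description above, not the actual patch; mydev_rx_depth(),
mydev_pick_extra_cpu() and MYDEV_FANOUT_THRESH are invented, and
napi_schedule() stands in for the netif_rx_schedule() call mentioned
above):

#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/smp.h>

static int rx_napi_cpus = 2;    /* max CPUs allowed to NAPI-poll */
module_param(rx_napi_cpus, int, 0444);

/* Runs on the target CPU in IPI context: start NAPI polling there. */
static void mydev_remote_napi_kick(void *info)
{
        struct napi_struct *napi = info;

        napi_schedule(napi);
}

/* Called from the poll loop (softirq context, interrupts enabled):
 * if the single hardware queue is still deep, enlist one more CPU,
 * up to the rx_napi_cpus module parameter. */
static void mydev_maybe_fan_out(struct mydev_priv *priv)
{
        int cpu;

        if (mydev_rx_depth(priv) < MYDEV_FANOUT_THRESH)
                return;

        cpu = mydev_pick_extra_cpu(priv, rx_napi_cpus);
        if (cpu < 0)
                return;

        smp_call_function_single(cpu, mydev_remote_napi_kick,
                                 &priv->percpu_napi[cpu], 0);
}

The nice property is that no interrupt-controller steering is needed:
the IPI path of smp_call_function_single() does the cross-CPU kick.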
Better to look into the receive packet steering patches that are still
under review, rather than reinventing this just for your driver.