On Mon, Jan 26, 2009 at 08:18:45PM -0600, Kevin Hickey wrote:
> Your example below is similar - debouncing the switch in hardware seems
> a better solution (albeit likely an expensive one) than patching the
> mainline kernel. And I reiterate: some devices send a lot of interrupts
> by design; we should honor their requests, not mask them out.
I agree in principle, but what's the point of honoring the requests if they
come in faster than the cpu can handle them? I think that's why the
handle_edge_irq() flowhandler masks the interrupt when another edge comes in
while the handler for the previous one is still running. This is also the
problem I'm running into: the second (and following) edges don't get acked
when the flowhandler tries to mask them, resulting in the irq storm. If I
explicitly ack it in the irq handler itself, all is well.
The current in-tree irq code behaves differently than it did in <=2.6.28; this
patch restores the old behaviour, which I believe is how the mask_ack()
callback is supposed to work. It affects only edge interrupts that come in
faster than the cpu can handle them; for all others there's no change (other
than 2 more stores in the mask fastpath).
(Or maybe it's a logic bug in handle_edge_irq(); I don't know.)