On Thu, 31 Jan 2002, Alan Cox wrote:
> The x86 behaviour forced as I understand it is
> barrier() - compiler level store barrier
> rmb() - read barrier to bus/DMA level
>         [no operation]
> wmb() - write barrier to bus/DMA level
>         [synchronizing instruction sequence
>          of locked add of 0 to stack top]
> (mb and wmb as names come from Alpha so I guess its definitive 8))
Well, after looking at the Alpha Architecture Handbook I see "mb" and
"wmb" are pure ordering barriers -- any transactions at the CPU bus (pins)
may still be deferred or prefetched (architecturally -- can't comment on
specific chips).  So maybe, after all, the macros should simply be "sync"
for MIPS (empty for MIPS I, with mb() equal to wbflush() for the R3220
and similar setups), and anything that wants to see all writes actually
committed should use wbflush(), which would be defined as "mb();
uncached_read();" (or in a system-specific way for the R3220, etc.)?
The i386 implementation seems stronger than it needs to be, but that's
probably because of the limited choice of instructions available.
> It does not enforce PCI posting. Also your spurious interrupt case is
> wrong for other horrible reasons. Interrupt delivery must never be
> assumed to be synchronous in a portable driver. (In fact you'll see async
> irq delivery on an X86)
For interrupts arriving at an interrupt controller -- agreed.
But we don't generally expect a spurious interrupt from a line that was
already masked at the controller level.  In other words, mask_and_ack()
must take whatever measures are necessary to ensure the addressed
controller has received the new mask.  If an interrupt slips through
occasionally anyway, that's not fatal, i.e. we can handle it, but it
shouldn't be the rule (i.e. receiving as many spurious interrupts as
real ones).  Am I right?
+ Maciej W. Rozycki, Technical University of Gdansk, Poland +
+ e-mail: email@example.com, PGP key available +