My colleague Earl writes:
> The desktop/server guys typically use much larger caches (i.e. >= 512K)
> and most have L2, compared to embedded systems which typically use less
> without an L2. So I'd also expect embedded guys using small caches to see
> larger decreases in performance due to more cache misses (i.e. more
> interrupts produce more evictions).
It's certainly true that running an interrupt routine (even one which
doesn't lead to any scheduling activity) will cause some cache
traffic. But it is important to keep the relative timescales in mind.
With a fairly slow memory system, evicting and replacing a cache
line costs about 150ns of read latency, plus a 100ns writeback
(notionally done "in the background" just after the read) on roughly
one miss in four...
Let's make some pessimistic assumptions. Suppose an interrupt routine
displaces 2KB of code and data from a cache with 32-byte lines, and
suppose the displacement happens twice, because the background process
then refills the cache to its liking. That's 2KB / 32B = 64 lines,
fetched twice, so about 128 reads, which at 150ns each cost
about 20us: roughly 2% of total time at a 1kHz interrupt rate. That
doesn't sound like it should be a huge effect.
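The estimate above can be sketched as a quick back-of-envelope
calculation. The figures are the pessimistic assumptions from the
text, not measurements; adjust them for your own memory system (this
version also charges the occasional writeback, which the rounded 20us
figure ignored):

```python
# Back-of-envelope model of interrupt-induced cache-miss overhead.
# Every number here is an assumption from the discussion, not a
# measurement.

def interrupt_cache_overhead(displaced_bytes=2 * 1024,  # footprint displaced by the IRQ
                             line_size=32,              # bytes per cache line
                             refill_factor=2,           # displaced once, refilled once
                             read_latency_ns=150,       # cost of one miss (read)
                             writeback_ns=100,          # cost of one writeback
                             writeback_rate=0.25,       # ~1 miss in 4 needs a writeback
                             irq_hz=1000):              # interrupt rate
    # Total line fetches: lines displaced, times the refill factor.
    misses = (displaced_bytes // line_size) * refill_factor
    # Average cost per miss includes the amortised writeback.
    cost_ns = misses * (read_latency_ns + writeback_rate * writeback_ns)
    period_ns = 1e9 / irq_hz           # time between interrupts
    return misses, cost_ns / 1000, 100 * cost_ns / period_ns

misses, cost_us, percent = interrupt_cache_overhead()
print(f"{misses} misses, {cost_us:.1f} us per interrupt, {percent:.1f}% of CPU")
# -> 128 misses, 22.4 us per interrupt, 2.2% of CPU
```

Even with the writebacks included, the overhead stays near 2%, so the
conclusion doesn't change.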
It would be better to measure it, though.