On Fri, 12 Jul 2002, Gleb O. Raiko wrote:
> > Well, you issue an instruction word read from the cache. The answer is
> > either a hit, providing a word at the data bus at the same time (so you
> > can't get a hit from one cache and data from the other) or a miss with no
> > valid data -- you have to stall in this case, waiting for a refill.
> Let it be miss and stall.
> > when data from the main memory arrives, it is latched in the cache (it
> > doesn't really matter, which one now -- if it's the wrong one, then
> > another refill will happen next time the memory address is dereferenced)
> > and provided to the CPU at the same time.
> At this time, CPU continues the execution of previous stalled
We don't care about previous instructions. The pipeline is stalled at the
instruction word fetch stage. Previously fetched instructions continue
being processed until they leave the pipeline.
> instruction. The CPU knows the stalled instruction is in the I-cache,
> but, unfortunately, the caches have been swapped already. The same cache
> line in the D-cache has its valid bit set. The CPU gets data instead of
> code.
Well, I have certainly understood what you mean from the beginning, but I
still can't see why this would happen in a real implementation.
When a cache miss happens, the instruction word is read directly from the
main memory into the pipeline, and the cache fill happens "accidentally",
as a side effect of that read.
What you describe would require the CPU to query the cache status somehow
during a fill (what if another fill is in progress? -- a cache controller
may fill additional lines on its own, as happens in certain
implementations) and then issue a second read once the fill completes.
That looks weird to me -- why would you design it this way?
+ Maciej W. Rozycki, Technical University of Gdansk, Poland +
+ e-mail: email@example.com, PGP key available +