

To: COLin <>
Subject: Re: 24k data cache, PIPT or VIPT?
From: "Edgar E. Iglesias" <>
Date: Sun, 23 Jan 2011 05:34:39 +0100
Cc: "" <>, "" <>
In-reply-to: <>
Original-recipient: rfc822;
References: <>
User-agent: Mutt/1.5.20 (2009-06-14)
On Fri, Jan 21, 2011 at 09:52:54AM +0100, COLin wrote:
> Hi all,
> I found that there is this information while Linux is booting:
>     [Primary data cache 32kB, 4-way, PIPT, no aliases, linesize 32 bytes]
> I thought the latest MIPS CPUs all use VIPT. I didn't find anything about 
> PIPT on 24k Software User's Manual, either.
> The code related to this is here:
>         case CPU_24K:
>         case CPU_34K:
>         case CPU_74K:
>         case CPU_1004K:
>                 if ((read_c0_config7() & (1 << 16))) {
>                         /* effectively physically indexed dcache,
>                            thus no virtual aliases. */ 
>                         c->dcache.flags |= MIPS_CACHE_PINDEX;
>                         break;
> The 16's bit of config 7 register:
>     [Alias removed: This bit indicates that the data cache is organized to
> avoid virtual aliasing problems. This bit is only set if the data cache
> config and MMU type would normally cause aliasing - i.e., only for
> the 32KB and larger data cache and TLB-based MMU.]
> Does it imply that the CPU is using PIPT?


This line is confusing:
"This bit is only set if the data cache config and MMU type would normally 
cause aliasing"

because I don't know what they mean by "normally".

If you have a cache that is organized so that each way is smaller
than or equal to the minimum MMU page size, then the cache rams
will be indexed by an offset taken from the page-offset, i.e. the part
of the address that is unchanged by MMU translation.

It's a common trick to make it possible to speculatively read the
cache data and tag rams in parallel with MMU translation. Cache hit
detection is done late in the access cycle.

Because the index is unaffected by MMU translation, the VIPT cache
behaves like a PIPT cache. It avoids aliasing.

The drawback is that you have to organize the cache so that no way's
tag or data ram is larger than a page.

