Ryan Rafferty writes:
> On Wed, 2 Apr 1997, Systemkennung Linux wrote:
> > A good RAM interface is a much more effective means to accelerate a
> > system than caches, especially when you have an application whose
> > working set exceeds the cache size. The Magnum's RAM
> > interface may be slow by today's standards but it was very fast
> > in its time.
> Ok, that's cool. But then why do computer manufacturers still incorporate
> cache into today's machines instead of using blazing memory interfaces?
Because making a 'blazing' memory subsystem is difficult. Very
difficult. Actually, I will claim that the memory subsystem is *the*
determining factor when it comes to system performance (when your
application touches a lot of memory - which is quite common).
Moore's Law has been used to make CPUs go really fast and DRAMs very
large. That's a fact of life. If you read comp.arch, they will refer
to this problem as 'the memory wall' (the latency of the P6 "Orion"
450GX chipset is ~190 cpu clocks). All these new memory organizations
(SDRAM, RDRAM, SLDRAM, EDRAM, CDRAM, ad nauseam) are trying to reduce
that latency.
Also, most computer projects have set goals for price and performance.
In a previous life at Terma (Hi Theo, still around?) I designed an
embedded X terminal, which used the R4600. We didn't use a cache, for
two reasons: price and board space. Board space was critical; we had
100+ chips on a standard 6U VME board. Instead we used an interleaved
memory system, which achieved 100 MByte/sec read/write performance with
a 33MHz bus. (No ASICs; just discrete registers and a large EPLD.)
Kai Harrekilde-Petersen <firstname.lastname@example.org> #include <std/disclaimer.h>
http://www.dolphinics.no/~khp/ Linux: the choice of a GNU generation
"Argue for your limitations, and sure enough - they're yours" --Richard Bach.