Up to now, Algorithmics' MIPS-based single-board computers have
offered a maximum of 256Mbytes DRAM. We can (and do) map all of that
to PCI at the beginning of time, so any PCI "bus master" device can
get to any part of system DRAM.
We're now working out how our system controller ("north bridge", more
or less) can handle much larger memories - we would like it to go on
past 4Gbytes. But then there aren't enough addresses on PCI to map all
of system memory.
We can see two options:
1. Just say it's too bad: PCI devices can only get at memory, say,
from 0-256Mbytes. We know that some PCs a while back couldn't DMA
above 16Mbytes, and we see that the kernel memory allocator has a
way of asking for memory below such a limit.
But this seems quite difficult to handle in a robust and efficient
way; for example:
- The virtual memory paging system presumably uses DMA into user
pages; it would need to choose instead to allocate an
intermediate buffer and copy data when the user page was not
DMA-able. Yuk. Or copy everything - double-yuk.
- Any system which is up for a long time with high memory demand
will risk deadlock if non-DMA requirements take too much
DMA-able memory, or waste a lot of memory if they take too little.
2. Add some dynamic kind of translation so PCI devices can get to
the memory they need anywhere, and we have enough translation
resources to keep all pending-DMA devices happy.
But the hardware will be relatively complicated, and may need
special software routines to maintain it.
We (more specifically Chris) have looked at the kernel sources, and
concluded that schemes of both types have been attempted - though the
sources don't, of course, pass judgement on how well they worked.
Those of you with experience: which would you recommend? And if (2),
can you point us to descriptions of good hardware facilities you've
met or even imagined?
Algorithmics Ltd - http://www.algor.co.uk
The Fruit Farm, Ely Road, Chittering, CAMBS CB5 9PH, ENGLAND
phone: +44 1223 706200 / fax: +44 1223 706250 / direct: +44 1223 706205