On 06/26/2013 10:50 AM, Ralf Baechle wrote:
On Wed, Jun 26, 2013 at 04:50:03PM +0000, Leonid Yegoshin wrote:
EVA actually gives an INCREASE in user address space - right now I am running a system with
2GB of physical memory and 3GB of user virtual address space. Work in progress
is to verify that GLIBC accepts addresses above 2GB.
I took the 0x40000000 for a KSEG0-equivalent because you previously
mentioned the value of 0x80000000.
I wrote about the kernel address layout. With EVA the user address layout is a
separate matter. In EVA the user may have access to, say, [0x00000000 - 0xBFFFFFFF]
through the TLB, and the kernel may have unmapped access to, say,
[0x00000000 - 0xDFFFFFFF]. But segment shifts are applied to each KSEG.
Yes, it is all about increasing physical and user memory while avoiding 64 bits. For
many solutions a 64-bit chip isn't justified (increased die area, performance
degradation, and wider DMA addresses for devices).
Fair enough - but in the end the increasing size of metadata and page tables
which have to reside in lowmem will become the next bottleneck. Highmem
I/O performance has never been great, is on most kernel developers' shit list,
and performance optimizations for highmem get killed whenever they
get in the way.
EVA doesn't use HIGHMEM. The kernel has direct access to all memory -
say 3GB (3.5GB?).
The Malta model gives only 2GB because of the PCI bridge loop problem.
So I'd say EVA gives you something like 1.5GB of memory at most with good
performance and a 2GB userspace, and something like 0.5GB, maybe 0.75GB,
with a 3GB userspace. Beyond that you need highmem, and that's where things,
especially kernel programming, get more complicated and slower.
Ralf, PTE and page table sizes depend on the page size and on HUGE page usage.
With EVA the ratio of usable to service (PTE + page table) memory is
the same as on legacy MIPS, and is independent of the user space size in use.
Right now I am running SOAK tests plus an additional "thrash" instance
over 1500MB on 2GB of physical Malta memory, and I see:
Thrash v0.3 thrashing over 1500 megabytes
procs -----------memory---------- ---swap-- -----io---- --system-- --cpu--
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy
10  0      0 950480 252384 107856    0    0     1    18  166  132 75 25
See: swap si/so == 0.
I use 16KB pages.