|To:||Christoph Lameter <firstname.lastname@example.org>|
|Subject:||Re: sparsemem support for mips with highmem|
|From:||David VomLehn <email@example.com>|
|Date:||Tue, 19 Aug 2008 16:38:01 -0700|
|Cc:||Randy Dunlap <email@example.com>, C Michael Sundius <Michael.firstname.lastname@example.org>, Dave Hansen <email@example.com>, Thomas Bogendoerfer <firstname.lastname@example.org>, email@example.com, firstname.lastname@example.org, email@example.com, Andy Whitcroft <firstname.lastname@example.org>|
Christoph Lameter wrote:
> David VomLehn wrote:
>> On MIPS processors, the kernel runs in unmapped memory, i.e. the TLB isn't even used, so I don't think you can use that trick. So, this comment doesn't apply to all processors.
> In that case you have a choice between the overhead of sparsemem lookups in every pfn_to_page or using TLB entries to create a virtually mapped memmap, which may create TLB pressure. The virtually mapped memmap results in smaller code and is typically more effective since the processor caches the TLB entries.
I'm pretty ignorant on this subject, but I think this is worth discussing. On a MIPS processor, access to low memory bypasses the TLB entirely. I think what you are suggesting is to use mapped addresses to make all of low memory virtually contiguous. On a MIPS processor we could do this by allocating a "wired" TLB entry for each physically contiguous block of memory. Wired TLB entries are never replaced, so they are very efficient for long-lived mappings such as this. Using the TLB this way does increase TLB pressure, but most platforms probably have only a small number of "holes" in their memory, so the overhead may be small.
If we took this approach, we could then have a single, simple memmap array where pfn_to_page looks just about the same as it looks with a flat memory model.
If I understand what you are suggesting correctly (a big if), the downside is that we'd pay the cost of a TLB match on every non-cached low memory data access. It seems to me that would cost more than the occasional, more expensive sparsemem lookup in pfn_to_page.
Anyone with more in-depth MIPS processor architecture knowledge care to weigh in on this?
-- David VomLehn