
Re: [RFC] Flush huge TLB

To: Hillf Danton <dhillf@gmail.com>, Ralf Baechle <ralf@linux-mips.org>
Subject: Re: [RFC] Flush huge TLB
From: David Daney <david.daney@cavium.com>
Date: Mon, 10 Oct 2011 10:17:11 -0700
Cc: linux-mips@linux-mips.org, "Jayachandran C." <jayachandranc@netlogicmicro.com>
In-reply-to: <CAJd=RBBPd6frOA5zCg5juHuWdZ6wHzmOhhufgGhUCOc=rkNEzA@mail.gmail.com>
References: <CAJd=RBBPd6frOA5zCg5juHuWdZ6wHzmOhhufgGhUCOc=rkNEzA@mail.gmail.com>
Sender: linux-mips-bounce@linux-mips.org
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.15) Gecko/20101027 Fedora/3.0.10-1.fc12 Thunderbird/3.0.10
On 10/09/2011 05:53 AM, Hillf Danton wrote:
When flushing the TLB, if @vma is backed by huge pages, the huge TLB entries
should be flushed, since huge pages are defined quite differently from normal
pages; as a side benefit, the flush loop is shortened a bit.

Any comment is welcome.


Note that the current implementation works, but is not optimal.

Thanks

Signed-off-by: Hillf Danton<dhillf@gmail.com>
---

--- a/arch/mips/mm/tlb-r4k.c    Mon May 30 21:17:04 2011
+++ b/arch/mips/mm/tlb-r4k.c    Sun Oct  9 20:50:06 2011
@@ -120,22 +120,35 @@ void local_flush_tlb_range(struct vm_are

        if (cpu_context(cpu, mm) != 0) {
                unsigned long size, flags;
+               int huge = is_vm_hugetlb_page(vma);

                ENTER_CRITICAL(flags);
-               size = (end - start + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
-               size = (size + 1) >> 1;
+               if (huge) {
+                       size = (end - start) / HPAGE_SIZE;
+               } else {
+                       size = (end - start + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
+                       size = (size + 1) >> 1;
+               }

Perhaps:
        if (huge) {
                start = round_down(start, HPAGE_SIZE);
                end = round_up(end, HPAGE_SIZE);
                size = (end - start) >> HPAGE_SHIFT;
        } else {
                start = round_down(start, PAGE_SIZE << 1);
                end = round_up(end, PAGE_SIZE << 1);
                size = (end - start) >> (PAGE_SHIFT + 1);
        }
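
For reference, round_down()/round_up() boil down to simple mask arithmetic
for power-of-two alignments.  A minimal userspace sketch (the HPAGE_SHIFT
value of 21 here is only illustrative) showing the effect on the range
bounds and the resulting entry count:

        /* Userspace illustration only; the kernel's round_down()/round_up()
         * do equivalent mask arithmetic for power-of-two alignments, and
         * the huge page size below is a made-up example value. */
        #include <stdio.h>

        #define HPAGE_SHIFT 21                  /* illustrative only */
        #define HPAGE_SIZE  (1UL << HPAGE_SHIFT)
        #define round_down(x, y) ((x) & ~((y) - 1))
        #define round_up(x, y)   round_down((x) + (y) - 1, (y))

        int main(void)
        {
                unsigned long start = 0x1234567UL;
                unsigned long end   = 0x1434567UL;

                start = round_down(start, HPAGE_SIZE);
                end = round_up(end, HPAGE_SIZE);
                printf("start=%#lx end=%#lx size=%lu\n",
                       start, end, (end - start) >> HPAGE_SHIFT);
                return 0;
        }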
.
.
.

                if (size <= current_cpu_data.tlbsize/2) {

Has anybody benchmarked this heuristic?  I guess it seems reasonable.
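
For concreteness (numbers purely illustrative): with a 64-entry TLB the
per-entry path is taken for ranges of up to 32 steps, and anything larger
falls through to dropping the whole context.  A trivial sketch of the
decision only, not of its cost:

        /* Purely illustrative numbers; just shows which path the
         * size <= tlbsize/2 check selects. */
        #include <stdio.h>

        int main(void)
        {
                unsigned long tlbsize = 64;     /* e.g. a 64-entry TLB */
                unsigned long sizes[] = { 4, 16, 32, 33, 128 };
                unsigned int i;

                for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
                        printf("size=%3lu -> %s\n", sizes[i],
                               sizes[i] <= tlbsize / 2 ?
                               "probe/invalidate each entry" :
                               "drop the whole context (ASID)");
                return 0;
        }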

                        int oldpid = read_c0_entryhi();
                        int newpid = cpu_asid(cpu, mm);

-                       start &= (PAGE_MASK << 1);
-                       end += ((PAGE_SIZE << 1) - 1);
-                       end &= (PAGE_MASK << 1);
+                       if (huge) {
+                               start &= HPAGE_MASK;
+                               end &= HPAGE_MASK;
+                       } else {
+                               start &= (PAGE_MASK << 1);
+                               end += ((PAGE_SIZE << 1) - 1);
+                               end &= (PAGE_MASK << 1);
+                       }

This is already done above in the suggested version, so it is removed here.


                        while (start < end) {
                                int idx;

                                write_c0_entryhi(start | newpid);
-                               start += (PAGE_SIZE << 1);
+                               if (huge)
+                                       start += HPAGE_SIZE;
+                               else
+                                       start += (PAGE_SIZE << 1);
                                mtc0_tlbw_hazard();
                                tlb_probe();
                                tlb_probe_hazard();



If we do something like that, then...

Acked-by: David Daney <david.daney@cavium.com>
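
For the record, the opening of local_flush_tlb_range() would then look
roughly like this (an untested sketch combining the patch with the rounding
suggested above; the probe/invalidate loop itself is unchanged apart from
the stride):

        if (cpu_context(cpu, mm) != 0) {
                unsigned long size, flags;
                int huge = is_vm_hugetlb_page(vma);

                ENTER_CRITICAL(flags);
                if (huge) {
                        start = round_down(start, HPAGE_SIZE);
                        end = round_up(end, HPAGE_SIZE);
                        size = (end - start) >> HPAGE_SHIFT;
                } else {
                        start = round_down(start, PAGE_SIZE << 1);
                        end = round_up(end, PAGE_SIZE << 1);
                        size = (end - start) >> (PAGE_SHIFT + 1);
                }

                if (size <= current_cpu_data.tlbsize/2) {
                        int oldpid = read_c0_entryhi();
                        int newpid = cpu_asid(cpu, mm);

                        /* No further masking of start/end is needed here;
                         * the probe/invalidate loop steps by HPAGE_SIZE
                         * when huge, otherwise by PAGE_SIZE << 1. */
                        ...
                }
                ...
        }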
