
[PATCH 07/16] mm: fix cache coloring on x86_64 architecture

To: Andrew Morton <akpm@linux-foundation.org>, Rik van Riel <riel@redhat.com>, Hugh Dickins <hughd@google.com>, linux-kernel@vger.kernel.org, Russell King <linux@arm.linux.org.uk>, Ralf Baechle <ralf@linux-mips.org>, Paul Mundt <lethal@linux-sh.org>, "David S. Miller" <davem@davemloft.net>, Chris Metcalf <cmetcalf@tilera.com>, x86@kernel.org, William Irwin <wli@holomorphy.com>
Subject: [PATCH 07/16] mm: fix cache coloring on x86_64 architecture
From: Michel Lespinasse <walken@google.com>
Date: Mon, 5 Nov 2012 14:47:04 -0800
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-mips@linux-mips.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org
In-reply-to: <1352155633-8648-1-git-send-email-walken@google.com>
References: <1352155633-8648-1-git-send-email-walken@google.com>
Fix the x86-64 cache alignment code to take pgoff into account.
Use the x86 and MIPS cache alignment code as the basis for a generic
cache alignment function.

The old x86 code would always align the start of the mmap to the aliasing
boundary, even when the program mmaps the file with a non-zero pgoff.

If program A mmaps the file with pgoff 0 and program B mmaps the same file
with pgoff 1, the old code would align both mmaps to the same boundary,
resulting in misaligned pages:

A:  0123
B:  123

After this patch, they are aligned so the pages line up:

A: 0123
B:  123
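
For illustration only (not part of the patch): a minimal userspace sketch of
how biasing the alignment by pgoff << PAGE_SHIFT keeps shared file pages on
matching cache colors. The 64KB aliasing boundary, the gap address, and the
color_align() helper are made-up example values that mirror the bottom-up
adjustment done by vm_unmapped_area(); this is not kernel code.

    /* Hypothetical demo; ALIGN_MASK and the gap address are example values. */
    #include <stdio.h>

    #define PAGE_SHIFT  12
    #define ALIGN_MASK  0xffffUL    /* pretend 64KB aliasing boundary - 1 */

    /* Round addr up so that (addr - align_offset) is aliasing-aligned,
     * mirroring "gap_start += (align_offset - gap_start) & align_mask". */
    static unsigned long color_align(unsigned long addr, unsigned long align_offset)
    {
            return addr + ((align_offset - addr) & ALIGN_MASK);
    }

    int main(void)
    {
            unsigned long gap = 0x7f0000123000UL;   /* hypothetical free gap */

            /* Old behaviour: align_offset is always 0, so both mappings
             * start on the same color regardless of their file offset. */
            unsigned long a_old = color_align(gap, 0);
            unsigned long b_old = color_align(gap, 0);

            /* New behaviour: align_offset = pgoff << PAGE_SHIFT. */
            unsigned long a_new = color_align(gap, 0UL << PAGE_SHIFT); /* pgoff 0 */
            unsigned long b_new = color_align(gap, 1UL << PAGE_SHIFT); /* pgoff 1 */

            /* File page 1 sits at A + PAGE_SIZE but at the start of B. */
            printf("old: page 1 colors %#lx vs %#lx\n",
                   (a_old + (1UL << PAGE_SHIFT)) & ALIGN_MASK, b_old & ALIGN_MASK);
            printf("new: page 1 colors %#lx vs %#lx\n",
                   (a_new + (1UL << PAGE_SHIFT)) & ALIGN_MASK, b_new & ALIGN_MASK);
            return 0;
    }

With the new behaviour both mappings of file page 1 land at addresses with
identical low bits modulo the aliasing size, which is the property the
cache-coloring code is meant to guarantee.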

Signed-off-by: Michel Lespinasse <walken@google.com>
Proposed-by: Rik van Riel <riel@redhat.com>

---
 arch/x86/kernel/sys_x86_64.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index f00d006d60fd..97ef74b88e0f 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -136,7 +136,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
        info.low_limit = begin;
        info.high_limit = end;
        info.align_mask = filp ? get_align_mask() : 0;
-       info.align_offset = 0;
+       info.align_offset = pgoff << PAGE_SHIFT;
        return vm_unmapped_area(&info);
 }
 
@@ -175,7 +175,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
        info.low_limit = PAGE_SIZE;
        info.high_limit = mm->mmap_base;
        info.align_mask = filp ? get_align_mask() : 0;
-       info.align_offset = 0;
+       info.align_offset = pgoff << PAGE_SHIFT;
        addr = vm_unmapped_area(&info);
        if (!(addr & ~PAGE_MASK))
                return addr;
-- 
1.7.7.3
