
To: Andrew Morton <>
Subject: Re: [PATCH v4 0/8] Avoid cache trashing on clearing huge/gigantic page
From: Ingo Molnar <>
Date: Fri, 14 Sep 2012 07:52:10 +0200
Cc: "Kirill A. Shutemov" <>,, Thomas Gleixner <>, Ingo Molnar <>, "H. Peter Anvin" <>,, Andi Kleen <>, Tim Chen <>, Alex Shi <>, Jan Beulich <>, Robert Richter <>, Andy Lutomirski <>, Andrea Arcangeli <>, Johannes Weiner <>, Hugh Dickins <>, KAMEZAWA Hiroyuki <>, Mel Gorman <>,,,,,
In-reply-to: <>
List-id: linux-mips <>
References: <> <>
User-agent: Mutt/1.5.21 (2010-09-15)
* Andrew Morton <> wrote:

> On Mon, 20 Aug 2012 16:52:29 +0300
> "Kirill A. Shutemov" <> wrote:
> > Clearing a 2MB huge page will typically blow away several levels of CPU
> > caches.  To avoid this, only the 4K area around the fault address is
> > cleared through the cache, and cache-avoiding clears are used for the
> > rest of the 2MB area.
> > 
> > This patchset implements a cache-avoiding version of clear_page only for
> > x86. If an architecture wants to provide a cache-avoiding version of
> > clear_page it should define ARCH_HAS_USER_NOCACHE to 1 and implement
> > clear_page_nocache() and clear_user_highpage_nocache().
> 
> Patchset looks nice to me, but the changelogs are terribly 
> short of performance measurements.  For this sort of change I 
> do think it is important that pretty exhaustive testing be 
> performed, and that the results (or a readable summary of 
> them) be shown.  And that testing should be designed to probe 
> for slowdowns, not just the speedups!

That is my general impression as well.

Firstly, doing before/after "perf stat --repeat 3 ..." runs 
showing a statistically significant effect on a workload that is 
expected to win from this, and on a workload expected to be 
hurt by this, would go a long way towards convincing me.

Secondly, if you can find some user-space simulation of the 
intended positive (and negative) effects then a 'perf bench' 
testcase designed to show the weaknesses of any such approach, 
running the very kernel assembly code in user-space, would also 
be rather useful:
comet:~/tip> git grep x86 tools/perf/bench/ | grep inclu
tools/perf/bench/mem-memcpy-arch.h:#include "mem-memcpy-x86-64-asm-def.h"
tools/perf/bench/mem-memcpy.c:#include "mem-memcpy-x86-64-asm-def.h"
tools/perf/bench/mem-memset-arch.h:#include "mem-memset-x86-64-asm-def.h"
tools/perf/bench/mem-memset.c:#include "mem-memset-x86-64-asm-def.h"

That code uses the kernel-side assembly code and runs it in 
user-space.
Clearing pages on page faults obviously needs some care to 
simulate properly in user-space, though.

Without repeatable hard numbers such code just gets into the 
kernel and bitrots there as new CPU generations come in - a few 
years down the line the original decisions often degrade to pure 
noise. We've been there, we've done that, we don't want to 
repeat it.


