
Re: [PATCH UPDATED 3/4] MIPS: mm: Use scratch for PGD when !CONFIG_MIPS_PGD_C0_CONTEXT

To: Jayachandran C <jchandra@broadcom.com>
Subject: Re: [PATCH UPDATED 3/4] MIPS: mm: Use scratch for PGD when !CONFIG_MIPS_PGD_C0_CONTEXT
From: Ralf Baechle <ralf@linux-mips.org>
Date: Thu, 20 Jun 2013 17:19:49 +0200
Cc: linux-mips@linux-mips.org
In-reply-to: <1371559516-4862-1-git-send-email-jchandra@broadcom.com>
References: <1370965298-29210-4-git-send-email-jchandra@broadcom.com> <1371559516-4862-1-git-send-email-jchandra@broadcom.com>
User-agent: Mutt/1.5.21 (2010-09-15)
On Tue, Jun 18, 2013 at 06:15:15PM +0530, Jayachandran C wrote:

> Allow usage of a scratch register for the current pgd even when
> MIPS_PGD_C0_CONTEXT is not configured. MIPS_PGD_C0_CONTEXT is set
> for 64r2 platforms to indicate availability of XContext for saving
> the cpuid, thus freeing Context to be used for saving the PGD. This
> option was also tied to using a scratch register for storing the PGD.
> 
> This commit allows a scratch register to be used to store the current
> pgd if one can be allocated for the platform, even when
> MIPS_PGD_C0_CONTEXT is not set. The cpuid is kept in the CP0 Context
> register in this case.
> 
> The code to store the current pgd for the TLB miss handler is now
> generated in all cases. When a scratch register is available, the PGD
> is also stored in the scratch register.
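
In other words, the refill handler ends up with two ways to locate the current
PGD: the slow path through a per-CPU table indexed by the cpuid it recovers
from the Context register, and the fast path that simply reads the PGD cached
in a CP0 scratch (KScratch) register. Below is a rough, self-contained C
sketch of just that idea; the names (scratch_reg, context_cpuid, the table
size) are illustrative stand-ins, not the kernel's exact symbols, and the real
handlers are emitted as MIPS assembly via uasm rather than written in C:

    #include <stdio.h>

    #define NR_CPUS 4

    /* Per-CPU table of PGD pointers; the cpuid used to index it is what
     * the commit message says stays in the CP0 Context register when
     * MIPS_PGD_C0_CONTEXT is not set. */
    static unsigned long pgd_current[NR_CPUS];

    /* Stand-in for a CP0 KScratch register that, per the patch, can hold
     * the current PGD directly when one can be allocated. */
    static unsigned long scratch_reg;

    /* Slow path: recover the cpuid, then look the PGD up per CPU. */
    static unsigned long get_pgd_via_context(int context_cpuid)
    {
            return pgd_current[context_cpuid];
    }

    /* Fast path: the PGD was written into the scratch register when the
     * mm was switched, so the refill handler reads it in one move. */
    static unsigned long get_pgd_via_scratch(void)
    {
            return scratch_reg;
    }

    int main(void)
    {
            int cpuid = 1;
            unsigned long pgd = 0xdeadb000UL;

            /* What "store the current pgd" amounts to at switch time:
             * update the per-CPU slot and, if available, the scratch reg. */
            pgd_current[cpuid] = pgd;
            scratch_reg = pgd;

            printf("via Context/cpuid: %lx\n", get_pgd_via_context(cpuid));
            printf("via scratch reg:   %lx\n", get_pgd_via_scratch());
            return 0;
    }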

I also pulled this one again as it was causing problems for others.

  Ralf
