linux-mips

Re: How to detect STACKOVERFLOW on mips

To: Adam Jiang <jiang.adam@gmail.com>
Subject: Re: How to detect STACKOVERFLOW on mips
From: Ralf Baechle <ralf@linux-mips.org>
Date: Wed, 30 Jun 2010 15:50:06 +0100
Cc: linux-mips@linux-mips.org
In-reply-to: <AANLkTimL7YMyb2ahmTgl8dqV_DNfsROjDhLEDm4jyVWE@mail.gmail.com>
References: <AANLkTimL7YMyb2ahmTgl8dqV_DNfsROjDhLEDm4jyVWE@mail.gmail.com>
Sender: linux-mips-bounce@linux-mips.org
User-agent: Mutt/1.5.20 (2009-08-17)
On Wed, Jun 30, 2010 at 02:59:42PM +0900, Adam Jiang wrote:

> I'm having a problem with the kernel mode stack on my box. It seems that
> a STACKOVERFLOW happened in the Linux kernel. However, I can't prove it
> because of the lack of any detection in the __do_IRQ() function like on
> the other architectures. If you know something about this, please help me
> with the following two questions.
> - Is it possible to do this on MIPS?
> - Or, more simply, how can I get the address $sp points to using
> asm() notation in C?

Due to the large register frame on MIPS the stack is 8kB on 32-bit, 16kB
on 64-bit, or PAGE_SIZE, whichever is larger.  This should be hard to
overflow by accident unless you're doing something outrageously stupid.

To access the stack pointer include <linux/thread_info.h>.  The function
current_thread_info() returns a pointer to the struct thread_info of the
current thread.  This structure is located at the bottom of the stack.
With something like

  register void *stackp __asm__("$29");

you can then access the stack pointer through the stackp variable.  You
obviously need to maintain the invariant

  current_thread_info() + 1 < stackp

at all times - and you'd better keep a bit of extra space available just
for peace of mind.

There used to be some code for other architectures that zeros the stack
page and counts how much of that has been overwritten by the stack.  That
was never ported to MIPS.

Another helper for finding functions that make excessive static stack
allocations is "make checkstack".
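A sketch of the invocation, assuming a cross-toolchain prefixed
mips-linux-gnu- (adjust the prefix to whatever toolchain you use):

```shell
# From the top of the kernel tree: lists functions by the size of
# their static stack frames, worst offenders first.
make ARCH=mips CROSS_COMPILE=mips-linux-gnu- checkstack
```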

  Ralf
