
Re: SMTC Patches [0 of 3] (how about 4)

To: Linux MIPS Org <linux-mips@linux-mips.org>
Subject: Re: SMTC Patches [0 of 3] (how about 4)
From: "Kevin D. Kissell" <kevink@paralogos.com>
Date: Thu, 11 Sep 2008 16:12:08 +0200
In-reply-to: <48C6DC4C.5040208@paralogos.com>
Original-recipient: rfc822;linux-mips@linux-mips.org
References: <48C6DC4C.5040208@paralogos.com>
Sender: linux-mips-bounce@linux-mips.org
User-agent: Thunderbird 2.0.0.14 (X11/20080501)
As sometimes happens, after releasing the trio of patches the other night,
further testing showed that there was still some quirky FPU-affinity behavior
when run on a 34Kf.  Further investigation turned up some odd little holes,
which I've fixed, but the misbehavior was mostly due to the default number
of FP emulations to be performed before declaring a process "FP intensive"
(which depends on loops_per_jiffy) being so low that "make" was being
declared an FPU-intensive program.  This, too, has been dealt with.
I'm going to follow this message with a "Patch 4 of 3" message containing
a patch which is meant to be applied after the first 3 - it eliminates a
part of patch 1 of 3.  It's probably technically feasible to generate a
patch that replaces patch 1, but git and I get along poorly enough that
I'd probably just make a mess.

         Regards,

         Kevin K.

Kevin D. Kissell wrote:
I've managed to steal enough time to rework the SMTC support
for the MIPS 34K (and, I suppose, 1004K) processors so that it
works again near the head of the source tree.  This involved
a complete rework of the clocking model to be compatible with
the new common timing event system, which finally enables
"tickless" operation in SMTC, something it needed pretty badly.
I also solved the problem with using the "wait_irqoff" idle
loop under SMTC.

There are going to be three patches that will follow.  The
first two are relatively localized fixes to problems with
FPU affinity and with IPI replay that I came across in testing
the new code.  The last is a pretty big patch, but it all
pretty much hangs together and I couldn't see any sensible
way to partition it.

    Regards,

    Kevin K.



