On Sat, Feb 19, 2000 at 03:03:16PM +0100, Matthias Heidbrink wrote:
> > > It's 64 because it's not possible to program the DECstation RTC to 100Hz
> > > or 1024Hz.
> > Well, 1024 Hz would be possible but certainly total overkill for a
> > DECstation 2100 with a 12.5 MHz R2000 :-)
> I remember that problems with these "odd" 64 Hz have been discussed on this
> list several times before. Maybe it would not be such a bad idea to go to a
> "standard" value.
> How expensive is the interrupt handler for the timer, I mean, how large
> would the loss in performance be when going to 1024 Hz?
> Or would it make sense to modify the clock setup and interrupt handler to
> run the clock at 1024 Hz, but run the complete timer interrupt handling
> code only every 10 ms / at 100 Hz (on average)?
> I used this method some years ago under DOS because my application required
> a fine timer resolution and DOS required a timer interrupt every 55 ms.
> On a 386-33 (which should be roughly equivalent in performance to an old
> MIPS CPU with a bit less than half of the clock speed)
There was a reason that, at the time, the /etc/motd shipped by MIPS in
Risc/OS said ``MIPS - the measure of performance'' :-)
> it showed no noticeable loss in performance.  Quite the opposite:
> the application, which had some delays in it, ran more smoothly
> because of the finer granularity of the delays.
Assuming we special-case this fast path and can get away with about 30
instructions which all execute from the cache, the extra 1024 - 64 = 960
interrupts per second cost roughly an additional 26880 cycles per second
of interrupt handling.  For a 12.5 MHz machine that's about 0.2%.  That's
still too optimistic as it ignores the cache effects.
Our current approach is to convert all time information, at the points
where it becomes visible in kernel interfaces, into units of ticks, that
is 1/HZ.  That's a relatively small number of places, and the conversion
only punishes the caller instead of imposing a permanent performance
penalty.