What's wrong with o32, n32 and n64?
Linux/MIPS (up to now, Autumn 2005) has made progress using recognisable dialects of standards defined originally by MIPS Corporation and SGI:
- o32 is for 32-bit CPUs, or 64-bit CPUs running only a 32-bit subset of the instruction set.
- n32/n64 are for 64-bit CPUs only. n64 has 64-bit pointers and long integers, whereas n32 has 32-bit pointers and long integers. (Other than that they're pretty much the same and will be considered together).
o32 is fairly different from n32/n64, so we'll pull out individual problems in separate sections.
What's wrong with all of them?
- The PIC (position-independent code) model produces inefficient code and is resistant to optimisation.
- No general-purpose register available as a thread pointer for efficient thread-local storage (see the sketch after this list).
- Not enough "saved" ("callee-saved") registers, limiting the compiler's ability to keep variables in registers.
- The calling and register conventions make it very hard to build gaskets to allow 32-bit code to intercall 64-bit code and vice versa. MIPS Technologies thinks such gaskets will be critical in allowing embedded systems builders to make the transition to using real 64-bit code.
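To make the thread-pointer point concrete, here is a minimal sketch in plain C (nothing MIPS-specific is assumed, and the counter and worker are made up for illustration). Where no general-purpose register is reserved as a thread pointer, each access to a __thread variable typically turns into a helper call (such as __tls_get_addr in the dynamic TLS models) or a kernel-assisted read of the thread pointer, rather than a cheap register-relative load:

    #include <pthread.h>
    #include <stdio.h>

    /* One instance of this counter exists per thread.  With a dedicated
     * thread-pointer register the compiler can reach it in a couple of
     * register-relative instructions; without one, each access generally
     * costs a runtime helper call or an emulated thread-pointer read. */
    static __thread unsigned long per_thread_counter;

    static void *worker(void *arg)
    {
        for (int i = 0; i < 1000; i++)
            per_thread_counter++;      /* hot path: TLS access cost matters */
        printf("thread %p counted %lu\n", arg, per_thread_counter);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1);
        pthread_create(&t2, NULL, worker, (void *)2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }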
What's wrong with o32?
o32 has been an orphan for a long time. Somewhere in the mid-1990s SGI dropped it completely, because all their systems had been using real 64-bit CPUs for some time.
- Committed to the obsolete MIPS I floating point model, which hides 16 of the FP registers.
- Only four argument registers, which means arguments are too often passed on the stack (a sketch of the problem follows this list).
- Many recent improvements are undocumented: the DWARF debug format, for example. As a result, a great deal of folklore is required to build a compatible implementation. In practice, few implementations are compatible at debug level, and many are incompatible even for interlinking object code.
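As a concrete illustration of the argument-register point above, consider a call with more than four word-sized arguments (sum6 is a hypothetical function, used only for the sketch). o32 passes the first four in $a0-$a3 and pushes the rest onto the stack; n32/n64 provide eight integer argument registers, so the same call stays entirely in registers:

    /* Hypothetical function used only to illustrate argument passing. */
    long sum6(long a, long b, long c, long d, long e, long f)
    {
        return a + b + c + d + e + f;
    }

    long caller(void)
    {
        /* o32: a..d travel in $a0-$a3; e and f are written to the caller's
         * stack frame and reloaded by sum6.
         * n32/n64: all six arguments travel in registers. */
        return sum6(1, 2, 3, 4, 5, 6);
    }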
What's wrong with n32/n64?
- Non-SGI use is really an uneasy subset of n64: the real thing would probably break the GNU tools, and the adequate subset is undocumented. There are dozens of ill-documented object code section types and the like.
- n32/n64 have only ever been Irix/Linux standards. There's no established "bare-iron" version of the object code.
- n64 as defined by SGI used their own (early, unique) extensions to DWARF debug information.
- n32/n64 are annoyingly different from each other for reasons which probably made sense at the time, but certainly don't now.
That last bullet point has led some people to ask:
Why do we need an "ILP32" ABI for 64-bit MIPS?
"ILP32" describes an ABI where the
long and pointer types are 32-bit quantities. o32 is already an ILP32 ABI, but n32 is an ILP32 ABI which exploits real 64-bit hardware and instructions (so
long long is a hardware-supported type). In the same notation, n64 is an I32LP64 ABI.
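The data models can be summarised by what a small C program reports when built for each ABI; the sizes in the comment are the ones these ABIs define (o32 and n32 are ILP32, n64 is I32LP64, more often written LP64):

    #include <stdio.h>

    int main(void)
    {
        /* Expected sizes in bytes:
         *              int  long  long long  void *
         *   o32         4     4       8         4    (ILP32, 32-bit subset)
         *   n32         4     4       8         4    (ILP32 on 64-bit hardware)
         *   n64         4     8       8         8    (I32LP64 / LP64)
         */
        printf("int=%zu long=%zu long long=%zu ptr=%zu\n",
               sizeof(int), sizeof(long), sizeof(long long), sizeof(void *));
        return 0;
    }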
Many programs which had been reasonably portable between 32-bit architectures have bugs when moved away from ILP32. Most of these bugs probably aren't terribly deep or difficult, but there are thousands of programs, and many of them have no need of 64-bit pointers. Linux is very careful to decouple the kernel and application ABIs, and you can run o32 applications on a 64-bit Linux kernel, so it's arguable that you don't need to invent such a thing at all: if an application really needs 64-bit instructions, recompile it for the I32LP64 universe of n64.
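A typical example of the kind of bug that surfaces outside ILP32 (a made-up fragment, not taken from any particular program): code that stashes a pointer in a 32-bit integer works by accident while pointers are 32 bits and silently truncates them once they are 64:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Classic ILP32 habit: "an unsigned int is big enough to hold a pointer". */
    static unsigned int bad_key(void *p)
    {
        return (unsigned int)(uintptr_t)p;  /* silently truncated where pointers are 64-bit */
    }

    /* Portable version: uintptr_t tracks the pointer size on every ABI. */
    static uintptr_t good_key(void *p)
    {
        return (uintptr_t)p;
    }

    int main(void)
    {
        int x;
        printf("truncated key = %#x, full key = %#" PRIxPTR "\n",
               bad_key(&x), good_key(&x));
        return 0;
    }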
In the workstation/server world this almost always made sense: the main reason to want 64-bit applications was for more address space. In the embedded world, though, there is significant demand for 64 bits just for the larger registers.
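The embedded attraction is simply wider data paths. With a 64-bit-capable ABI such as n32, a uint64_t lives in one register and each operation in the sketch below is a single instruction, while under o32 the same C code works on register pairs through multi-instruction sequences (the checksum itself is invented purely as an illustration):

    #include <stddef.h>
    #include <stdint.h>

    /* Fold a buffer of 64-bit words into a simple XOR checksum.
     * n32/n64: each iteration is one 64-bit load and one 64-bit XOR.
     * o32: the same source needs paired 32-bit loads and XORs. */
    uint64_t xor_fold64(const uint64_t *words, size_t count)
    {
        uint64_t acc = 0;
        for (size_t i = 0; i < count; i++)
            acc ^= words[i];
        return acc;
    }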
Dumping n32 would make life easier, but keeping it seems kinder, particularly to those whose OSes are less willing than Linux to support two different universes.