What's wrong with o32, n32 and n64?
Linux/MIPS has so far (autumn 2005) made progress using recognisable dialects of standards originally defined by MIPS Corporation and SGI:
- o32 is for 32-bit CPUs, or 64-bit CPUs running only a 32-bit subset of the instruction set.
- n32/n64 are for 64-bit CPUs only. n64 has 64-bit pointers and long integers, whereas n32 has 32-bit pointers and long integers. (Other than that they're pretty much the same and will be considered together).
o32 is fairly different from n32/n64, so we'll pull out individual problems in separate sections.
What's wrong with all of them?
- PIC (position-independent code) is inefficient and resists optimisation.
- No general-purpose register available as a thread pointer for efficient thread-local storage.
- Not enough "saved" ("callee-saved") registers, limiting the compiler's ability to keep variables in registers.
- The calling and register conventions make it very hard to build gaskets to allow 32-bit code to intercall 64-bit code and vice versa. MIPS Technologies thinks such gaskets will be critical in allowing embedded systems builders to make the transition to using real 64-bit code.
What's wrong with o32?
o32 has been an orphan for a long time. Somewhere in the mid-1990s SGI dropped it completely, because all their systems had been using real 64-bit CPUs for some time.
- Committed to the obsolete MIPS I floating point model, which hides 16 of the FP registers.
- Only four argument registers, which means arguments are too often passed on the stack.
- Many recent improvements are undocumented: the DWARF debug format, for example. A great deal of unwritten folklore is therefore required to build a compatible implementation. In practice, few implementations are compatible at the debug level, and many are incompatible even for interlinking object code.
What's wrong with n32/n64?
- Non-SGI use is really an uneasy subset of n64: the full SGI definition would probably break the GNU tools, and the workable subset is undocumented. There are dozens of ill-documented object-code section types, and so on.
- n32/n64 have only ever been Irix/Linux standards. There's no established "bare-iron" version of the object code.
- n64 as defined by SGI used their own (early, unique) extensions to DWARF debug information.
- n32/n64 are annoyingly different from each other for reasons which probably made sense at the time, but certainly don't now.
That last bullet point has led to some people asking:
What's wrong with dumping n32 (and things like it)?
By a "thing like n32" we mean a MIPS ABI which exploits real 64-bit hardware and instructions (so long long is a hardware-supported type) but which uses 32-bit pointers.
SGI defined n32 when they discovered that a full 64-bit model broke many programs which had been reasonably portable between 32-bit architectures. It wasn't their big-ticket programs they were worried about, but the mass of small ones. In a sense, they need not have bothered: Linux is very careful to decouple the kernel and application ABIs, and you can run o32 applications on a 64-bit Linux kernel. But you ought to read and ponder the section above on what's wrong with o32...
In the workstation/server world, the main reason to want 64-bit applications was more address space. In the embedded world, there is significant demand for 64 bits just for the larger registers.
Dumping n32 would make life easier, but keeping it seems kinder.