Compressed instruction set

A compressed instruction set, or simply compressed instructions, is a variation on a microprocessor's instruction set architecture (ISA) that allows instructions to be represented in a more compact format. In most real-world examples, compressed instructions are 16 bits long on a processor that would otherwise use 32-bit instructions. The 16-bit ISA is a subset of the full 32-bit ISA, not a separate instruction set. The smaller format requires some tradeoffs: generally, fewer instructions are available, and fewer processor registers can be used.

The concept was originally introduced by Hitachi as a way to improve the code density of their SuperH RISC processor design as it moved from 16-bit to 32-bit instructions in the SH-5 version. The new design had two instruction sets, one giving access to the entire ISA, and a smaller 16-bit set known as SHcompact that allowed programs to run in smaller amounts of main memory. As the memory of even the smallest systems is now orders of magnitude larger than the systems that spawned the concept, size is no longer the main concern. Today the main advantage is that compressed instructions reduce the number of accesses to main memory, which in turn reduces energy use, a significant concern in mobile devices.

Hitachi's patents were licensed by Arm Ltd. for its processors, where the technique was known as "Thumb". Similar systems are found in MIPS16e and PowerPC VLE. The original patents have expired and the concept can be found in a number of modern designs, including RISC-V, which was designed from the outset to use it. The introduction of 64-bit computing has led to the term no longer being as widely used; these processors generally use 32-bit instructions and are technically a form of compressed ISA, but as their instruction sets are mostly modified versions of the ISA of the 32-bit version of the same processor family, there is no real compression.

Concept

Microprocessors encode their instructions as a series of bits, normally divided into a number of 8-bit bytes. For instance, in the MOS 6502, the ADC instruction performs binary addition between an operand value and the value already stored in the accumulator. There are a variety of places the processor might find the operand; it might be located in main memory, or in the special zero page, or be an explicit constant like "10". Each of these variations uses a different 8-bit instruction, or opcode; if one wanted to add the constant 10 to the accumulator, the instruction would be encoded in memory as $69 $0A, with $0A being hexadecimal for the decimal value 10. If it were instead adding the value stored in main memory at location $4400, the encoding would be $6D $00 $44, with the address stored in little-endian order.[1]
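
As a minimal sketch of the encodings just described (the helper functions below are invented for illustration; the opcode values $69 and $6D and the little-endian address order are those given above), the two forms of ADC can be produced like this:

    def adc_immediate(value):
        # ADC #constant: opcode $69 followed by the one-byte constant.
        return bytes([0x69, value & 0xFF])

    def adc_absolute(address):
        # ADC address: opcode $6D followed by the 16-bit address,
        # low byte first (little-endian).
        return bytes([0x6D, address & 0xFF, (address >> 8) & 0xFF])

    print(adc_immediate(10).hex())        # "690a"   -> $69 $0A, two bytes
    print(adc_absolute(0x4400).hex())     # "6d0044" -> $6D $00 $44, three bytes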

Note that the second instruction requires three bytes because the memory address is 16 bits long. Depending on the instruction, it might use one, two, or three bytes.[1] This is now known as a variable-length instruction set, although that term was not common at the time, as most processors, including mainframes and minicomputers, normally used some variation of this concept. Even in the late 1970s, as microprocessors began to move from 8-bit formats to 16-bit, this concept remained common; the Intel 8088 continued to use 8-bit opcodes that could be followed by zero to six additional bytes depending on the addressing mode.[2]

It was during the move to 32-bit systems, and especially as the RISC concept began to take over processor design, that variable-length instructions began to go away. In the MIPS architecture, for instance, all instructions are a single 32-bit value, with a 6-bit opcode in the most significant bits and the remaining 26 bits used in various ways to represent its limited set of addressing modes. Most RISC designs are similar. Moving to a fixed-length instruction format was one of the key design concepts behind the performance of early RISC designs; in earlier systems an instruction might take one to six memory cycles to read, requiring wiring between various parts of the logic to ensure the processor did not attempt to perform the instruction before the data was ready. In RISC designs, reading an instruction normally takes a single cycle, greatly simplifying the decoding. The savings in these interlocking circuits are instead applied to additional logic or additional processor registers, which have a direct impact on performance.[3]
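
Because every instruction occupies exactly 32 bits with the opcode in a fixed position, decoding reduces to simple bit masking. The sketch below splits a word using the common MIPS I-type field layout; the example word and register names are chosen purely for illustration:

    def decode_mips(word):
        # Fixed 32-bit MIPS word: the 6-bit opcode always sits in bits 31..26,
        # so the decoder never has to read further bytes to find the length.
        opcode = (word >> 26) & 0x3F   # bits 31..26
        rs     = (word >> 21) & 0x1F   # bits 25..21, first source register
        rt     = (word >> 16) & 0x1F   # bits 20..16, second register
        imm    = word & 0xFFFF         # bits 15..0, interpretation depends on the opcode
        return opcode, rs, rt, imm

    # 0x20090005 encodes "addi $t1, $zero, 5": opcode 8, rs 0, rt 9, immediate 5.
    print(decode_mips(0x20090005))     # (8, 0, 9, 5)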

Code density

The downside to the RISC approach is that many instructions simply do not require four bytes. For instance, the logical shift left instruction shifts the bits in a register to the left. On the 6502, which has only a single arithmetic register, A, this instruction can be represented entirely by its 8-bit opcode $0A.[1] On processors with more registers, all that is needed is the opcode and a register number, another 4 or 5 bits. On MIPS, for instance, the instruction needs only a 6-bit opcode and a 5-bit register number. But as is the case for most RISC designs, the instruction still takes up a full 32 bits. As these sorts of instructions are relatively common, RISC programs generally take up more memory than the same program on a processor with variable-length instructions.[4]
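
The difference in encoded size can be made concrete with a short sketch: the 6502 shift needs a single opcode byte, while the equivalent MIPS shift (encoded here in the standard R-type field positions, using register $t0 purely as an example) always occupies four bytes.

    # 6502: shift the accumulator left; the whole instruction is one opcode byte.
    asl_6502 = bytes([0x0A])

    def mips_sll(rd, rt, shamt):
        # MIPS "sll rd, rt, shamt": an R-type word whose opcode and funct
        # fields are both zero; the register and shift fields are 5 bits each.
        word = (rt & 0x1F) << 16 | (rd & 0x1F) << 11 | (shamt & 0x1F) << 6
        return word.to_bytes(4, "big")

    sll_mips = mips_sll(rd=8, rt=8, shamt=1)      # sll $t0, $t0, 1
    print(len(asl_6502), "byte vs", len(sll_mips), "bytes")   # 1 byte vs 4 bytes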

In the 1980s, when the RISC concept was first emerging, this was a common point of complaint. As the instructions took up more room, the system would have to spend more time reading instructions from memory, and it was suggested that these extra accesses might actually slow the program down. Extensive benchmarking eventually demonstrated that RISC was faster in almost all cases, and this argument faded. However, there is one area where memory use remains a concern regardless of performance: small systems and embedded applications. Even in the early 2000s, the price of DRAM was high enough that cost-sensitive devices had limited memory. It was for this market that Hitachi developed the SuperH design.[5]

In the earlier SuperH designs, SH-1 through SH-4, instructions always take up 16 bits. The resulting instruction set has real-world limitations; for instance, it can only perform two-operand math of the form A = A + B, whereas most processors of the era used the three-operand format, A = B + C. Dropping one operand removes four bits from the instruction (there are 16 registers, so each register field needs 4 bits), although at the cost of making math code somewhat more complex to write. For the markets targeted by the SuperH, this was an easy tradeoff to make. A significant advantage of the 16-bit format is that the instruction cache holds twice as many instructions for any given amount of SRAM. This allows the system to perform at higher speeds, although some of that gain is offset by the additional instructions needed to perform operations that a single three-operand instruction could handle.[6]
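
The cost of the two-operand form can be sketched with a toy register model (the names below are invented for illustration and are not SuperH syntax): a three-operand machine computes A = B + C in one instruction, while a two-operand machine needs a copy first.

    regs = {"A": 0, "B": 5, "C": 7}

    def add3(dst, src1, src2):
        # Three-operand form: dst = src1 + src2, one instruction.
        regs[dst] = regs[src1] + regs[src2]

    def mov(dst, src):
        regs[dst] = regs[src]

    def add2(dst, src):
        # Two-operand form: dst = dst + src; the destination is also a source,
        # so only two register numbers need to be encoded.
        regs[dst] = regs[dst] + regs[src]

    # Three-operand ISA: one instruction.
    add3("A", "B", "C")

    # Two-operand ISA, SuperH-style: the same result takes two instructions.
    regs["A"] = 0
    mov("A", "B")
    add2("A", "C")
    print(regs["A"])    # 12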

For the SH-5, Hitachi moved to a 32-bit instruction format. In order to provide backward compatibility with their earlier designs, they included a second instruction set, SHcompact. SHcompact mapped the original 16-bit instructions one-to-one onto the internal 32-bit instructions; a compressed instruction did not expand into multiple operations, as would be the case in earlier microcoded processors, it was simply a smaller encoding of the same instruction. This allowed the original small-format programs to be easily ported to the new SH-5, while adding little to the complexity of the instruction decoder.[7]

ARM licensed a number of Hitachi's patents on aspects of the instruction design and used them to implement their Thumb instructions. ARM processors with a "T" in the name included this instruction set in addition to the original 32-bit one, and could be switched between 32- and 16-bit mode on the fly using the BX instruction. When in Thumb mode, only the lower eight of the ARM's normal sixteen registers are visible, but these are the same registers as in 32-bit mode, so data can be passed between Thumb and normal code through them. Every Thumb instruction has a 32-bit counterpart, so Thumb is a strict subset of the original ISA.[8] One key difference between ARM's model and SuperH is that Thumb retains some three-operand instructions in the 16-bit format, which it accomplishes by reducing the visible register file to eight, so only 3 bits are required to select a register.[9]
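
The arithmetic behind the eight-register limit can be sketched as follows; the helper below simply counts how many bits are left for the opcode once the register operands have been encoded (the split shown is schematic, not the actual Thumb encoding):

    import math

    def opcode_bits_left(instruction_bits, operands, visible_registers):
        # Bits remaining for the opcode (and any flags) after encoding
        # one register number per operand.
        bits_per_register = math.ceil(math.log2(visible_registers))
        return instruction_bits - operands * bits_per_register

    print(opcode_bits_left(32, 3, 16))   # 20 -- full 32-bit ARM, sixteen registers
    print(opcode_bits_left(16, 3, 16))   # 4  -- a 16-bit format with all sixteen registers
    print(opcode_bits_left(16, 3, 8))    # 7  -- restricting Thumb to eight registers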

The MIPS architecture added a similar compressed set, MIPS16e, which is very similar to Thumb. It too allows only eight registers to be used, although these are not simply the first eight; the MIPS design uses register 0 as the zero register, so registers 0 and 1 in 16-bit mode are instead mapped onto MIPS32 registers 16 and 17. Most other details of the system are similar to Thumb.[10] Likewise, the latest version of the Power ISA, formerly PowerPC, includes the "VLE" instructions, which are essentially identical in concept. These were added at the behest of Freescale Semiconductor, whose interest in Power is mostly aimed at the embedded market.[11]
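
The register remapping can be pictured as a small lookup table; the sketch below shows only the two entries described above and is not the complete MIPS16e mapping:

    # Mapping from compressed (MIPS16e) register numbers to full MIPS32 registers.
    # Only the two entries mentioned above are listed; the remaining six
    # compressed registers also map onto fixed MIPS32 registers (see the manual).
    MIPS16E_TO_MIPS32 = {
        0: 16,   # $0 is the hard-wired zero register, so compressed r0 -> $16
        1: 17,   # compressed r1 -> $17
    }

    def expand_register(compressed_reg):
        # Translate a 3-bit compressed register number into its MIPS32 number.
        return MIPS16E_TO_MIPS32[compressed_reg]

    print(expand_register(0))   # 16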

Modern use

Starting around 2015, many processors moved to a 64-bit format. These generally retained a 32-bit instruction format while expanding the internal registers to 64 bits. By the original definition these are compressed instructions, as they are smaller than the basic data word size. However, the term is not used in this context; references to compressed instructions invariably refer to 16-bit versions.[12]

References

Citations

  1. Verts 2004.
  2. "Understanding ARM Architectures". InformIT. 23 August 2010.
  3. Bacon, Jason. "MIPS Instruction Code Formats". Computer Science 315 Lecture Notes. Archived from the original on 2019-07-17. Retrieved 2021-04-09.
  4. Weaver & McKee 2009.
  5. "Effects of 16-bit instructions". Renesas.
  6. SuperH 1996.
  7. SH-5 CPU Core, Volume 1: Architecture (PDF). p. 8.
  8. Lemieux 2004.
  9. "Thumb instruction summary". ARM7TDMI Technical Reference Manual.
  10. MIPS16e2 Application-Specific Extension Technical Reference Manual. MIPS. 26 April 2016.
  11. Power ISA V2.07. IBM.
  12. Alpha Architecture Handbook (PDF). DEC. October 1996. p. 1.4.

Bibliography