Talk:Intel 8080


The meaning of 8008

How did the microprocessor get its name? Why is it called '8080' and not something else, like 'A' or 'MPL'?

According to Intel 4004, the names of the four-bit processors were thought up by Federico Faggin.
The numbering scheme in use by Intel at the time was tfnn: technology (1 = P-channel MOS), function (1 = RAM, 2 = logic, 3 = ROM, 5 = shift) and number. This would leave the MCS-4 family he designed with radically different numbers, so he went for:
  • 4001: ROM
  • 4002: RAM
  • 4003: shift
  • 4004: CPU
When Intel developed an 8-bit CPU under contract and it wasn't put to its intended use, they renamed it 8008 for the general market. I think the success of the 4004 may have had something to do with this. However, there was no real 8000 family of components designed to work together (as far as I know), and the final 8 is not a component number in such a family (a practice that had already been abandoned for the 4008, an 8-bit address latch in the 4000 family).
So why did they name its successor 8080, instead of 8009 say? I don't know. Possibly because the final 8 also signified its bittage? Maybe they chose 8080 to signify their intention of starting a line of 8-bit CPUs, but your guess is as good as mine.
8080 to 8086 are all CPUs as far as I know. The 8087 was a floating-point co-processor for the 8086. The 8088 is actually a downgraded 8086.
Then, with the introduction of the 80186 (and its downgrade, the 80188), all sanity was lost again. My best guess is that the family had sort of split into three related ones, with 86 = CPU, 87 = FPU, 88 = 8-bit downgrade. But why they moved to 80186 rather than 8186 I don't know. Marketing reasons?
Don't forget to sign your posts. The numbering is similar to the 7400 TTL family that went from 7499 to 74100. Gah4 (talk) 14:53, 15 December 2017 (UTC)
That's a really bad comparison, though. Intel still had 8089 through 8099 unused, and most of the other possible code points in the 80xx range (804x and maybe 805x had been claimed by microcontroller families, and 800x was possibly off limits to avoid confusion, but everything else was still available, as was 8xnn where x is anything other than 0, including letters). There was no need to add extra digits, other than some odd desire both to keep increasing the numbers without backtracking to lower values AND to retain "80" followed by an arbitrary-length string of digits as a stylised, recognisable family name. Until that point they'd only really used a few numbers within the 8xxx range, a very few within the 4xxx, and various codes in the 1xxx and 2xxx ranges for their semiconductor memory products; there were loads of remaining choices in the four-digit realm.
Whereas even by the late 60s (with direct reference to a 1967-68 season order book), TI had a huge family of discrete-logic ICs available. It had started out with the 15, 52 and 72 series (plus 35, 62?), with code numbers within those ranges starting at 0, then expanded through the 53/73 (these plus the 52/72 being diode-transistor logic; the lower numbers were simpler "linear", i.e. analogue, devices), 54/74 (64?), 55/75 (both TTL) and eventually 76 series (soundchips). The code numbers extended first to 00 when 0-9 were exhausted, and then gained additional digits (this time prefixed with 1 to reduce confusion) once they'd eventually hit 99. That was necessary with e.g. the 73 and 74 series, because there already existed further families with the numbers they'd otherwise roll over into, a problem that Intel didn't suffer. Even in the late 60s the sub-codes were well up into the xx50 range; by 1985 they were past xx600. These days there are a couple of thousand members of the 74xx family alone, and you can find examples with four and even five digits, plus additional letter codes, not just two and three.
TI were absolutely forced to add extra digits to their code numbers because they had nowhere to go on hitting 5299, 7399, 7499 etc. It's simply not a problem Intel shared; they could have moved from the 80xx family to an 81xx one if they'd wanted to. Even with the 8018x and 8028x launching simultaneously, which would have caused considerable cramping in the 80xx range if they'd tried to fit them into four digits without incrementing the second digit, it could still have been done. One could have been the 8186 and the other the 8286. Or, as the 186 was a quasi-microcontroller design intended more for embedded applications than general-purpose desktop computers, the 286 could have been the 8096, and the 186 the 8066 or 8076, or even just the 8016. Or they could have gone alphanumeric or hexadecimal: 80A6, 80B6, 808A/B, etc. Chucking the extra digit in there is still a weird choice, and particularly a right pain in the backside if you're trying to produce an ordered list of CPUs. Though maybe there's genius in their madness: it would, after all, have made their new chips "pop" to the top of any list of 80-series chips, topped only by the 8008, because to a computer "80186" and "80286" both sort "before" 8086. The idea breaks down once you add the 80386 etc., but at the time they weren't committed to direct 32-bit successors to the 8086, and were putting a lot of their stock into alternative architectures like the iAPX 432 and so on, which were expected to be the 286's actual successors. 51.7.49.27 (talk) 18:52, 18 September 2018 (UTC)
Along with the 8080, there is a series of 82xx support chips, extended for the 8086. The 8224 is the clock generator for the 8080, as it needs higher voltages than normal TTL levels. The 8284 is, then, the clock generator for the 8086. There might be some 81xx too, but I don't remember any right now. Also, there is some convenience in reducing overlap between manufacturers: it would be very confusing if Intel made 68xx chips unrelated to the Motorola chips, for example. (Though there is a 4040 in the 4xxx CMOS series.) As well as I know, the 4004, 4040, 8008, and 8080 were all designed with the intention of embedded use; that is, not as the center of a general-purpose computer system. It happens that the 8080 isn't bad for non-embedded use. The 6502, for a variety of reasons, is better for embedded use, and I suspect caused a lot of wasted coding time. There is also the Intel series of EPROMs, from the 2708 on up, in 27xx and 27xxx numbering. Gah4 (talk) 00:18, 19 September 2018 (UTC)
Yeah, that's a good point actually. I was already coming back to issue a stack of mea culpas having done some further reading around and come across a big ol' list of Intel parts (...it's just I found the TI list rather earlier). I don't know *when* they were introduced, but there are certainly a whole pile of 81xx, 82xx etc support chips as well as endless other things in the 80xx range (and really, I should have known about e.g. the 8224 as I've run across it enough times before), and certainly some of them would have popped up in e.g. the IBM PC whilst the 186 and 286 projects were still in their infancy. The structure on the whole doesn't seem particularly well organised or logical, so it's possible their sticking with 8086 as the signifier for "mainstream CPU" and just shoving numbers in the middle of it was an attempt to impose some order on the chaos. 146.199.0.170 (talk) 19:44, 19 September 2018 (UTC)

I was recently told about the NEC V20, which had an 8080 compatible mode. Should it be mentioned in this article? --StuartBrady (Talk) 17:28, 17 April 2007 (UTC)

Probably not. That was marketed much more as a drop-in replacement for the 8088. The 8080 mode was probably just a gimmicky "might as well include it" extra to make it an option for upgrading 8080 machines as well as 8088 ones (both chips used the same package, though with slightly different pinouts) and so increase sales. But by the time NEC's chip was on sale, the 8080 was pretty old news - the 8086/8088 was the new game in town, and anyone who'd wanted more power from their system would probably have installed an 8085 instead already. I've heard many examples of people upgrading PCs and Tandys with V20s, and even some OEMs installing them by default, but never a squeak about it being used in place of the 8080 (or 8085). 51.7.49.27 (talk) 19:00, 18 September 2018 (UTC)

"The 8080 supported up to 256 input/output (I/O) ports"[edit]

Is this true? The Z80 is also officially documented as having 256 input/output ports, but it always loads the entire 16-bit address bus according to the usual rules of 16-bit register pairing (e.g. "OUT (C), A" actually outputs A to the port BC), so it actually supports 65536 input/output ports in all versions. As the 8080 also has a 16-bit address bus, it has to get a high address byte from somewhere; is it deterministic? — 87.194.58.250 (talk) 22:53, 3 July 2008 (UTC)

I believe the 8080, unlike the Z80, asserted both the lower and the upper halves of the address bus with the same 8-bit I/O address; I'll have to check the datasheet (I have it somewhere) to be completely sure, however. HenkeB (talk) 17:52, 17 August 2008 (UTC)
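To make the difference concrete, here is roughly how it plays out at the instruction level (a sketch based on the behaviour described above; the Z80 half uses Zilog mnemonics):

    ; Z80: OUT (C),A drives the full 16-bit BC onto the address bus,
    ; so hardware that decodes A15-A8 can distinguish 65536 ports
    LD   BC,1234h
    OUT  (C),A      ; writes A to port 34h, with A15-A8 carrying 12h

    ; 8080: the 8-bit port number from the instruction is duplicated
    ; on both halves of the address bus during the I/O machine cycle
    MVI  A,55h
    OUT  34h        ; address bus shows 3434h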

Pins

Pin 17 is DBIN, not RD; pin 21 is HLDA; WR is WR# (active low); and so on... Look at: http://datasheets.chipdb.org/Intel/MCS-80/intel-8080.pdf - 77.20.250.82 (talk) 02:51, 14 November 2008 (UTC)

Checked datasheets ([1] and [2]) and corrected the article. I think this information (pin functions) is not needed in the article. See for analogy Intel 4004, Intel 4040, Intel 8008, Intel 8085 and Intel 8086. 82.193.140.165 (talk) 07:07, 17 October 2016 (UTC)

ISA

The 8080 implemented the "pre x86" ISA? The article should state what ISA it implements, not some broad generalization. Rilak (talk) 17:16, 21 November 2008 (UTC)

Presumably the quasi-standard shared by the 8080, 8085, and Z80. It was quite widespread in the S100-bus, generally CP/M-using, first-generation micros of the mid-1970s. However, it would probably be a case of mistaken precedence to say that the i8080 "implemented" that standard architecture; it would be rather more accurate to say it gave rise to it. S100 basically developed from a need to gang all the signals for an i8080-based multi-board computer (including all 40 of the CPU's external lines) through a commodity bus connector, which happened to be a mil-spec 100-conductor board-edge slot because that's what was most cheaply available. It wasn't anything Intel came up with, specified, or had in mind whilst designing the chip. The other two CPUs happen to use the same general pinout for convenience, and so saw widespread use in S100 designs that needed little or no real modification (a few minor things like implementing the Z80's NMI were sometimes bothered with) to take advantage of their improved performance. However, it certainly wasn't a backplane ISA exclusive to the 8080 and its workalikes: about a dozen different CPUs, including some with extremely different design philosophies and signal architectures (e.g. the Motorola 68000), were ultimately adapted to work with it. 51.7.49.27 (talk) 19:08, 18 September 2018 (UTC)

"Most widely used CPU"[edit]

A reverted "peacock" contribution claimed that the 8080 was the most widely used CPU in history (somehow proved by the fact that it was used in a space vehicle...). I think the ARM is a more likely candidate, being in all of our phones and things. Destynova (talk) 11:37, 9 October 2009 (UTC)

In all likelihood, the 6502 family is the most widely used MPU, as hundreds of millions of them are shipped every year in embedded applications, including implanted defibrillators. It's a hard thing to quantify, since exact numbers are difficult to obtain.
Eh, IDK... what microcontroller family is that used in, then? If we're going down to the level of embedded processors, then surely Intel's and Motorola's own uC families are in with a strong shout, as are PIC, Atmel/Microchip Technology (between the three of them, responsible for the cores used in the Arduino family), and indeed ARM (or, more accurately, the various companies that produce physical chips with their processing-core IP). All could conceivably be called microprocessors, just ones that happen to have integrated program and working memory plus IO ports that can directly interface with external hardware. The 6502 is a very widely used thing, but simply happening to appear in a semi-smart defibrillator isn't that strong a claim for still being used in its millions, especially as its more common use was in more fully featured computers, or at least in applications where a traditional uC wouldn't quite cut it. There's a good number of uCs now that outperform it in pretty much every area (performance, size, power use, cost...), and the balance of programmers more familiar with coding for those rather than the MOS part must surely be shifting quite rapidly. All up, it's probably more likely that it's simply impossible to be sure any more, unless we can somehow get hold of the actual production numbers from every IC manufacturer worthy of mention, and some kind of worthwhile stats on how many of each chip family (or the machines they were integrated into) ever put into service are likely to still be in use.
(EDIT: about the only real support I've seen for that claim is in the lede of the 6502 page itself, where it's uncited as well as unsupported anywhere else in the article text. I wonder if our erstwhile correspondent was the author of that section as well, or merely parroting it?)

51.7.49.27 (talk) 19:18, 18 September 2018 (UTC)

As for asserting the 8080 was the most widely used, perhaps that was true in the early 1970s. That MPU was eclipsed by the 6502 and Z80 by the end of the 1970s.
38.69.12.6 (talk) 17:19, 8 November 2013 (UTC)[reply]


I understand that the basis for the claim that the 8080 is the most widely used MPU stems from the fact that it was used in the Voyager spacecraft, and due to the Voyager's current galactic location the 8080 has a galaxy-wide distribution unmatched (and unmatchable) by any other unit. Please advise if any of this information is incorrect. — Preceding unsigned comment added by Plerdsus (talk · contribs) 08:16, 12 August 2017 (UTC)

Whilst it's still the furthest-flung evidence of human civilisation, other than the very sparse cloud of radio waves that currently extends about 100 light years, "galactic location" is a bit rich. It's still a matter of debate whether Voyager is even outside our solar system, hinging on whether you consider the heliopause to mark the boundary, or the Oort Cloud to be fully part of our local neighbourhood. Besides, even if it's truly gone interstellar, Sol is far and away still its closest star. Not exactly a "galaxy-wide distribution". Still, I do like the idea that "most widely used" could be a very pedantically literal description ;-) 51.7.49.27 (talk) 19:18, 18 September 2018 (UTC)

When was production ended?

The article states the 8080 "was released in April 1974", but makes no comment as to when production ended. Presumably "official" Intel production ended before that of the second-source suppliers and the clones from the Warsaw Pact nations?

Flag register

It seems odd that there is no mention of the flag register. Although it is not accessed by all the instructions that can be used with the other registers, including A, it is accessed together with the A register in the PUSH and POP instructions, i.e. PUSH AF and POP AF, as if AF were a pair like BC, DE and HL.

That is odd, now that you mention it. I've added a section. --Wtshymanski (talk) 15:55, 7 February 2013 (UTC)

Interrupt flag

The official documentation does not mention an Interrupt flag at bit position 5. In fact, it claims that PUSH PSW (PUSH AF on the Z80) writes 0 on the stack at this bit position. The EI/DI instructions set and reset the INTE flip-flop, not an interrupt flag. If such a flag does exist, it is likely very obscure, and a reference would be needed. — Preceding unsigned comment added by 31.46.199.69 (talk) 13:36, 10 September 2016 (UTC)
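For reference, the 8080 documentation lays the pushed flag byte out like this, with bits 5 and 3 always written as 0 and bit 1 always as 1:

    bit:   7   6   5   4   3   2   1   0
    flag:  S   Z   0   AC  0   P   1   CY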

Checked datasheets ([3] and [4]) and corrected the article. 82.193.140.165 (talk) 07:13, 17 October 2016 (UTC)
The same mistake seems to have propagated to the articles Intel 8085 (where this bit was undocumented) and FLAGS register, now corrected. 31.46.199.69 (talk) —Preceding undated comment added 14:38, 28 October 2016 (UTC)

Why was the 8080 developed?

The history of general purpose CPUs#1970s: Large Scale Integration article says "Also, to control a cruise missile, Intel developed a more-capable version of its 8008 microprocessor, the 8080." The list of Intel microprocessors#8080 article also mentions a cruise missile.

Assuming that is true (I wish we had references that either confirm or state some other goal for this chip), why did they need another chip? Was there something the 8008 or the D-17B couldn't do at all that required a new chip? Or was this re-design to give faster responses or use less power or in some other way make incremental improvements?

Why was the 8080 developed? --DavidCary (talk) 03:23, 23 August 2013 (UTC)

Because the 8008 was pretty crap, all up. It's about the most basic form of a technically-8-bit processor you could possibly make: shoehorned into the same general package as the 4004, running at a rather low maximum clock frequency (and an even lower instruction rate), and containing such wonderful features as a multiplexed address/data bus (a terrible misfeature intended simply as a material/cost-cutting measure on the CPU side, even though it then demands additional external logic to disentangle the buses, and one that Intel inexplicably came back to with the 8086/8088 and 80186/80188) and a non-power-of-two address range (meaning an artificially small address space and wasted bits in 8-bit-aligned registers and code).
They needed, and customers were demanding, something with higher performance, a greater memory space, and a more logical design that was easier to implement. I think there was also a change to the actual logic style used, meaning it no longer needed multiple voltages to operate, just +5V. Which is why you see hardly anything that actually uses an 8008 (or 4004), but loads of things using an 8080 or its derivatives. The 8008 just wasn't quite powerful enough, and was just a little too annoying to build a system around; whereas the 8080 was much simpler to integrate, and was *just* powerful enough for the sort of things that early micro users wanted to use their machines for.
At the same time they also came out with the 4040, which provided much the same improvements over the 4004 in computers/calculators/other machines that only needed 4 bits of data bandwidth (e.g. anything working on BCD values, including financial data and utility billing, would generally be happy with that word length, but the 4040 still boosted performance and made system building easier)... though that soon proved to be a dead-end market, better served either by a single higher-powered machine providing timeshared service to multiple users (who each got about the same amount of computing power as before, for lower hardware and power costs), or by the microcontroller line Intel brought out in the form of the 8048 et al. at about the same time as the 8080 was upgraded into the 8085 (as a stopgap before the 8086). 51.7.49.27 (talk) 19:34, 18 September 2018 (UTC)
Followup: OK, some corrections and some additional emphasis. First up, now that I'm better briefed on both, the design of the thing seems very much like "expand the 4040 to 8 bits wide, simplify the memory bus to a flat architecture that includes RAM and ROM in the same space and allows code execution from both (economising on at least six pins, in return for using four extra for data/address), and rationalise some of the other functions". I wouldn't be surprised to find it sat somewhere between the 4004 and 4040 in terms of development time. Second, bear in mind it was never meant as a general-purpose processor the way the 8080 was: it was meant as a concentration of the various existing components of an intelligent terminal's processor board, so there's a lot of seeming weirdness in there. Third, it was s-l-o-w, initially slower-clock-than-the-4-bits kind of slow, and only ran faster than them because of its wider data path and internal registers/buses; the first generation could only reach 500 kHz, and the second 800 kHz (vs the 8080's 2 to 3 MHz, without data/address multiplexing). That really needs bearing in mind: an 8-bit CPU that only rates half a MHz, and has to squeeze 14 bits of address and 8 bits of data all down the same 8-bit pipe, is going to be extremely limited in application. Plus its instruction set was extremely limited; in fact, less comprehensive than the 4040's (60 instructions)... it only had 47, one more than the 4004. Your programs would have to be very RISC-like and wasteful of memory (and CPU/bus cycles), at a time when memory was very expensive stuff (and those cycles were in short supply too). And like the 4-bits, it was locked into a rather inflexible cycle structure... I think something like 8 clock ticks per machine cycle, with one, two or three of those per instruction? The 8080 has rather greater flexibility in how many ticks each instruction takes; many of them complete in fewer than 8 clocks, and certainly the great majority in fewer than 16.
On the flipside... what I meant to say instead of "non-power-of-2 address space" was "an address width that isn't a multiple or major fraction of the data word length", i.e. 14 bits when the data word is 8 bits. There are many chips that have, e.g., a 16-bit word (with the bytes within those words individually addressable if needed) and a 24-bit address space (i.e. 3 bytes, or 1.5 words), which isn't too inconvenient to work with. It even makes it possible to cram an 8-bit instruction plus 24 bits of address into two words, if your architecture allows such things. Intel seem to be pretty much alone in doing crazy things like the 8008 scheme, or the 8086/8088's 20-bit address range (with hare-brained segmentation and address-calculation logic) on chips with a 16-bit word length and a 16- or 8-bit data bus, so transferring an address takes 2 or 3 transactions and needs 1.5 to 2 words (3 to 4 bytes) to store in memory, yet you only get the benefit of two-and-a-half bytes (one-and-a-quarter words) of actual memory space. And that's in the actual core design; it's not something cut down from having a full 24 or 32 bits, in the way the 6507 used in the Atari VCS was essentially a 6502 without interrupts and with the address bus cut down from 16 to 13 bits (but if you wanted it, the full 16-bit-address 6502 was still available).
And the power part of what I said was totally wrong. The 8080 needs +12, +5 and -5V to run. The 4-bits each only need -15V, and the 8008 needs +5V and -9V. Thing is, as we still see in modern machines, 12V and 5V, both positive and negative, are rather more "standard", common, and easy things to get out of AC-adaptor supplies. Negative 15 volts? Negative 9 AND positive 5 volts? Now we're into weird-ass territory. There were a lot of other devices you might fit into a computer that would draw 12V or 5V... rather fewer that would demand 9V (especially negative), much less -15V...
So, it has a wider and more flexible instruction set, a much higher maximum clock frequency (about 4x), better cycle efficiency (instructions need fewer clocks), even faster bus transfers (fully demultiplexed = easily 3x greater bandwidth or more, as you're no longer sending two address bytes for every data byte), a 4x wider memory address space that makes better use of the available memory and register space for storing addresses, and more easily provisioned (even if not actually simpler) power requirements. It's a proper CPU that can run an interactive, real-time, video-terminal-based operating system with reasonable fluidity, and load programs of significant size in one go, shrinking what would have been high-end minicomputer power 5-10 years earlier, and mainframe power 15-20 years earlier, into a single IC. Versus one that was barely good enough to operate a 7-segment-display cash register, and was aimed more at replacing simple mechanical control systems, or the discrete-logic electronic designs that had already started to replace those. Why wouldn't you have designed a successor chip of that power level to upgrade from the 8008? To capture the market that wanted minicomputer-esque power, as well as just programmable-calculator power?
As for the 8085 itself... something of an oddball also, as much a step backwards as forwards, much like the 8086/8088. The instruction set was extended and made more efficient, but the chip itself went back to using a multiplexed bus in order to fit various additional control lines (such as a Z80-like NMI) within the same 40-pin DIP. 146.199.0.170 (talk) 20:13, 19 September 2018 (UTC)
If there is any doubt whether the 8080 improved on the 8008, check out the memory-move code examples for each processor. I did a bit of work to simplify the 8008 code but it is still convoluted. The assembly programmer expends a lot of work making up for the 8008's not-yet-invented LXI, INX/DCX, and STAX instructions.
Since you mentioned the 8085... unlike the triple-multiplexed 8008 bus, the multiplexed bus on the 8085 does not harm performance. The 8085's lower address byte appears early in the machine cycle and flows through the external open 74LS373 transparent latch with only a few nanoseconds of delay. Even if the bus were not multiplexed, the address would be delayed a similar amount by the 74LS241 line driver needed in most systems. RastaKins (talk) 16:19, 4 February 2024 (UTC)
As with its predecessors, the 8080 was designed to be used for embedded systems. It just happens that it was powerful enough for actual general-purpose computers. There were a few attempts at general-purpose 8008 machines, though they never got popular. Well, partly it was the emphasis of MITS to make the Altair look like a more usual minicomputer. Otherwise, the funny power supplies of the earlier chips were usual for PMOS. PMOS is upside down, using a negative power supply and pull-up outputs; using +5/-9 allows easy interfacing with TTL logic. Gah4 (talk) 03:55, 5 February 2024 (UTC)

Intel names for registers

The article used the names Zilog gave to the 16-bit registers in their Zilog Z80 microprocessor (such as BC, DE, HL, AF), which was based on the Intel 8080. However, an article on an Intel processor should refer to the Intel names (such as B, D, H, PSW) and not the Zilog names, as logical as the latter are. I have modified the article to reflect this. If there is another convention I am not aware of, it could be changed back, but some citation should be given. — Preceding unsigned comment added by 31.46.199.69 (talk) 13:56, 10 September 2016 (UTC)

As well as I know, BC, DE, HL, AF are the right names for them, but the Intel assemblers use the short forms. Gah4 (talk) 03:57, 5 February 2024 (UTC)
The Intel documentation begs to differ. The 8080 data sheet consistently refers to the register pairs as B, D, H, and SP, for the relevant instructions LDAX, STAX, DAD, PUSH, and POP. --jpgordon𝄢𝄆𝄐𝄇 21:48, 5 February 2024 (UTC)
I will have to get out my documentation again. For other than assembler notation, do they call them that? Consider the SPHL, PCHL, and LHLD instructions. Gah4 (talk) 06:40, 6 February 2024 (UTC)
Page 4-3 of the User's Manual indicates so to me. --jpgordon𝄢𝄆𝄐𝄇 06:47, 6 February 2024 (UTC)
The single-letter designator (rp) appears to be only an assembly-language convention. Looking at the lower-left corner of page 4-3 of the user's manual cited, Intel defines the bit-pattern (RP) register pairs as B·C, D·E, H·L, and SP (even though SP is not a register pair). So maybe the fussy answer is to use H·L instead of HL? RastaKins (talk) 17:55, 6 February 2024 (UTC)
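To make the two conventions concrete, here are a few instructions side by side (same binary encodings; Intel 8080 mnemonics on the left, Zilog Z80 on the right):

    PUSH PSW        PUSH AF
    POP  PSW        POP  AF
    DAD  B          ADD  HL,BC
    LDAX D          LD   A,(DE)
    INX  H          INC  HL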

Error in sample code?

The sample code for "memcpy" seems to re-enter the whole subroutine (the "loop" label is at the top), including the initial (and legitimate) test for BC=0. I never programmed the 8080, but I think repeating that test and returning if zero is basically useless after the first iteration. Instead, the "loop" label should bring the CPU back to the "ldax" instruction, once the end-of-loop test (jnz) has established that BC hasn't yet reached zero. Correct me if I'm wrong, though. — Preceding unsigned comment added by Alessandro Ghignola (talk · contribs) 21:23, 26 January 2017 (UTC)
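For reference, the restructure being suggested looks like this (a sketch using the common BC = count, DE = source, HL = destination convention, not necessarily the article's exact listing):

    memcpy: mov  a,b     ; test BC == 0 once, on entry
            ora  c
            rz           ; nothing to copy
    loop:   ldax d       ; A <- (DE)
            mov  m,a     ; (HL) <- A
            inx  d
            inx  h
            dcx  b       ; DCX sets no flags...
            mov  a,b     ; ...so test BC for zero by hand
            ora  c
            jnz  loop    ; jump back to ldax, not to the entry test
            ret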

Most important factor?

The article claims "maybe the most important factor - full 16-bit address bus (versus the 8008 14-bit), enable it to access 64KB of memory". I am not convinced. During the time the 8080 was becoming popular, it would have been rare to be able to afford more than 16K; that remained mostly true well into the Z80 days. Unlike the 8008's, though, the 8080 stack can be anywhere in the address space. That makes it look more like a computer, and less like a microcontroller (as the 8008 was intended to be). As far as I know, Intel designed the 8080 as a microcontroller, too. Even more, though, the 6502, with its restricted stack addressing, does seem to make a better microcontroller, yet it caught on for many computers. A few high-end users might have used 64K with their 8080, but not enough to turn the crowd. Gah4 (talk) 14:47, 15 December 2017 (UTC)
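To illustrate the stack point (a minimal sketch): on the 8080 the stack is ordinary memory addressed through SP, whereas the 8008 kept a fixed internal stack of only seven levels of subroutine nesting.

    lxi  sp,0F000h   ; 8080: place the stack anywhere in the 64K space
    push h           ; (SP-1) <- H, (SP-2) <- L; SP <- SP-2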

That "most important factor" was obviously the writer's personal surmising, without any basis in fact. (It might have come from an uncited source; carelessly written and researched sources can be guilty of that as well.) I removed it, but I also rewrote the whole sentence, since it failed to explain the full significance of the first two factors it mentioned. The 8080 was a lot faster than the 8008 thanks to that 40-pin package, because that freed it from the 8008's restriction of having to dribble addresses and data out over a single narrow 8-pin-wide bus. Its use of NMOS logic gave it a further speed boost, quite apart from the issue of TTL compatibility. The physics of NMOS transistors makes them faster than PMOS ones of the same size, which is why major semiconductor makers switched to NMOS as soon as they could. The only reason they started with PMOS was because it was more difficult to devise a reliable NMOS fabrication process. --Colin Douglas Howell (talk) 23:05, 15 December 2017 (UTC)[reply]
Looks fine to me. Gah4 (talk) 01:16, 16 December 2017 (UTC)
Don't let yourself fall into the trap of thinking that RAM is all the address space would be used for, or that they were specifically targeting the home micro market. Intel themselves gave examples in the 8080 documentation of using the wide address space to minimise component count by implementing memory-mapped IO (instead of having a whole separate IO bus and the switching/decoding hardware for it) when the full memory complement wasn't needed, or when you wanted more than 256 addresses (which in their examples was as often used for simplistic one-of-eight selection anyway, instead of fully decoding the binary value). Essentially you limit yourself to 32 kbytes of memory space and use A15 as the memory/IO switch (and then, possibly, use the address lines as a one-of-15 selector). Those 32 kbytes get split between RAM and ROM... already in the mid 70s there were systems that made use of at least 12 kbytes of ROM (the Altair level 2 BASIC, for one thing; level 3 needed even more, but was intended for loading at least partially into RAM from an offline store), and it would be simplest to reserve 16 kb for that, leaving only 16 kb of space for your RAM. Which, if you were flush enough, you could easily have filled quite early in the Altair's lifespan, seeing as the company offered 4 kb and 8 kb memory upgrade boards pretty much from day one. Insert two of the latter, and, poof, you're out of expansion potential already.
Now, if we consider setting up a similar arrangement on an 8008... half your 16KB space goes to IO, OK, that leaves us with 8KB for RAM and ROM. An 8KB ROM will allow us to have a moderately sophisticated BASIC and/or boot-time monitor program, though not the best it could be, so we're already compromising. On top of which, 8KB is all we can install on the RAM front ... a single max-capacity Altair board. Rather disappointing. Even in the early 70s, it's quite apparent that the 8008 is a somewhat hamstrung chip that can't easily extend far beyond its intelligent-terminal roots. You might want to use it for the machine that you use to control your 8080 based computer, but you wouldn't want to have it as the heart of the computer itself.
And bear in mind they weren't even considering it as something for the home market at first. They had strictly professional plans for their latest baby, essentially going after the minicomputer market with a vengeance. And minis already routinely came with the equivalent of 64KB or more of addressable, R/W main memory, without even considering ROMs and such, which is why Intel also pointed out how it wouldn't be too great a job of work to implement bank-switching on the 8080 (e.g. by using the IO ports to trigger a mapper chip), especially in terms of keeping the lower 32 kb locked in permanently (maybe split between 16KB system ROM and 16KB kernel RAM) and banking the upper 32 kb, extending its ultimate memory space quite considerably. For businesses, scientific institutes, financial or utility corporations, industrial users etc., 64KB was very much something they would have had use for, and could afford to buy and fit, especially if both semiconductor memory and fully integrated microprocessors made specifying such a system massively cheaper than the old minis that used (usually hand-woven) ferrite-core memory and multi-board processors built of dozens of discrete logic chips. The Altair/etc. crowd came totally out of left field as far as Intel were concerned. But S100 machines of a very similar design quickly ended up spreading into all manner of places, where they offered similar or far better computing performance (especially in clusters) than the previous generation of big-iron dinosaurs, but took up less space, cost less to buy, and consumed much less power. I've seen pictures of the boxes incorporated into video-wall control banks in TV studios, dozens of them crammed into mounting racks that used to hold one or two minicomputers in stock market control rooms, etc. That's the sort of application Intel were going after in the first place, and those machines wouldn't just have had much more memory in them than a typical home hobbyist computer, but also other expensive specialist kit like high-resolution colour graphics boards (thousands of dollars at the time) and high-speed network links to tie them all together, for processing the great volume of transaction data generated by a rapidly digitising market floor.
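Driving a mapper chip of the kind described above could be as simple as this (port number and bank layout invented purely for illustration):

    ; hypothetical hardware: an OUT to port 0FFh selects which 32KB
    ; RAM bank appears in the upper half of the address space
    MVI  A,2
    OUT  0FFh       ; map bank 2 at 8000h-0FFFFh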
64KB of address range instead of 16KB may not have meant much to the home hobbyist until well into the 8085/8086 era, but for professional users of 8080-based computers it would have been extremely important. That's why you don't see the 6800 or 6502 limited to less than a 16-bit address range either, except in their respective sharply cut-down, even-cheaper variants (like the 6507) used in toys and games consoles where the limited memory range was completely irrelevant (...or at least was, until about halfway through the Atari 2600's life, when games publishers really started feeling the pinch and began designing bank-switching hardware into their cartridges, essentially adding back what MOS and Atari had blithely taken away thinking it wouldn't be needed). If you were making something like that, then, sure, the 8008, 4040 or 4004 were your go-to guys within the Intel range. But if you were making a high-performance business machine? 8080.
It's why IBM ended up using 8080s, then 8085s, and ultimately the 8088 in their own various attempts to break into the professional-level home and small-office market, in multiple different machines through the 70s and early 80s, before finally getting lucky with the PC. That computer is model number "5150"... there are various other lower numbers in the 51xx and 50xx ranges, and apart from one or two using custom in-house IBM processors, they're all based around Intel 808x's, with quite decent RAM and ROM complements, and aimed at various office-type and data-processing tasks. And most of them were hideously expensive... definitely not hobbyist/"toy computer"/games-machine grade. 146.199.0.170 (talk) 20:37, 19 September 2018 (UTC)

How the heck does I/O addressing work?

I've looked over the article, and the pinout, back and forth a couple of times, and I just can't figure it out. If the idea of the chip having a wholly discrete set of 256 (x2) 8-bit IO ports is to keep that traffic separate from local memory transactions... how does it do that? There doesn't appear to be a completely separate IO bus akin to the modern QPI or PCIe (nowhere near enough pins anyway), but nor does there seem to be any way of asserting an external signal that would trigger the support logic into recognising the 8-bit address copied onto the two halves of the address bus as specifically an "I/O" address instead of a memory address, and thus piping 8 bits of data to or from a peripheral instead of a matching RAM address.

If instead it's specifically a duplicated 8-bit address on the 16-bit address bus that somehow triggers it, well... there's a whole long list of reasons I can think of for THAT being a really bad idea. It would be far worse than the typical alternative (such as that used with the 8086) of corralling all the IO ports into the very top end of memory (i.e. FF00 through FFFF) and having the trigger simply be "are all 8 MSBs set to 1?", and certainly no better than the alternative used e.g. on various 68000 machines, where the CPU is supported by a memory-manager chip that receives addresses and internally separates them into accesses to RAM, ROM, various I/O peripherals etc., which are themselves permanently attached to the data bus (and in a lot of cases the address bus as well) but only activated when the appropriate back-alley CS line running to them from the MMU goes high.

In all those cases you're still going to lose at least 256 bytes of memory space if not more (still, is that really such an issue when you've got another 63.75kb to play with?) and have to be careful not to accidentally overrun into IO space when writing/reading large blocks into/out of RAM (but in that case you'd end up "smashing the stack" first anyway, and on e.g. the 6800/6502 ultimately end up wrapping round into Zero Page and causing a monumental crash).

If you want the IO truly separate from memory, there needs to be some simple way to signal the support logic that the address being presented is either one of the 256 IO addresses or one of the 65536 memory addresses (just a single "IO/MEM" line would suffice, with the active space determined by whether the line is high or low; essentially it's an "A16", for a memory architecture that has only 64.25kb available and reserves the top quarter-kilobyte for memory-mapped IO), and as it stands there doesn't seem to be any method of doing so...?! 51.7.49.27 (talk) 19:49, 18 September 2018 (UTC)

Followup: OK, it's one of the oddball choices. You have to use an additional chip for it, essentially an external register/buffer, plus a small amount of discrete helper logic. The CPU, at the start of each bus cycle (which runs for several clock ticks), briefly asserts a "status" bit pattern on its data (address? I forget) lines, while driving the "status sample" line high. Which bits are driven high out of the eight represents what action the processor intends to take during the coming bus cycle, including memory read, memory write, IO read, IO write, and a few others. The buffer samples that bit pattern and quickly makes a copy of it available on its own output lines, which are wired up, along with the more regular R/W select and enable lines (etc.), to bus-gating ICs and the like, such that when an address (plus data, if relevant) is driven onto the appropriate lines, and the read or write select/enable lines strobed, the combination of CPU, buffer and discrete gates ends up shunting or pulling data to/from the right place, either a particular memory address or a particular IO port/device.
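To pin down the hedge above: it is the data lines that carry the status byte, put out while SYNC is high and caught by the 8228 system controller (or a discrete latch). Per the 8080 datasheet, the bits are assigned as:

    D0  INTA    interrupt acknowledge
    D1  WO      low for a write or output cycle
    D2  STACK   stack pointer access
    D3  HLTA    halt acknowledge
    D4  OUT     I/O write cycle
    D5  M1      instruction fetch
    D6  INP     I/O read cycle
    D7  MEMR    memory read cycle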

Quite why they didn't just add an "A16" line to select between memory and IO is anyone's guess. I mean, they were presumably short on pins, which is why the status bits come off the data bus, but at the same time there are separate lines for Read Enable and Write Enable, plus a couple of other similar functions that would be considered completely redundant in a modern design (instead you'd just have a general data IO enable line, and another that chooses between read and write, as well as other binary options that rapidly make things much more efficient than using one-of-many). It could have been done without that extra confusion, which essentially just encourages you to go with memory-mapped IO for the sake of simplicity and sanity if you don't actually need the full 64.0KB. 146.199.0.170 (talk) 20:52, 19 September 2018 (UTC)

I believe Intel wanted another, smaller address space for I/O operations (but a bigger one than on the Intel 8008). The IO status bit in the status word simplifies decoding the IO space: without it, if you wanted to carve out only the last 256 bytes of a 64K memory space for IO use, you would need an 8-input AND gate. Feature of marginal benefit: if the IO space is not explicitly decoded, IN and OUT instructions access memory. Because the IO port number is mirrored on the upper and lower address buses, memory locations 0000H, 0101H, 0202H...0FEFEH, 0FFFFH are accessible. RastaKins (talk) 00:56, 12 October 2021 (UTC)
Yes, it is inconvenient to need to decode them separately. I believe some systems use memory-mapped I/O with Intel processors. I suspect it was usual for minicomputers at the time to have a separate I/O space, which influenced the design. Though, as well as I know, the 8080 wasn't intended as a minicomputer replacement, but instead as an upgrade from the 8008. People did try to make microcomputers (that is, general-purpose programmable boxes) out of the 8008, but they didn't get very popular. The 8080 is really where processors became powerful enough to do it. Gah4 (talk) 21:25, 19 September 2018 (UTC)

Example code format

The example code only works if the reader's screen is wide enough; otherwise, the comments wrap around, causing the machine-code column to be out of sync with the asm column. Anyone know how to fix this? --jpgordon𝄢𝄆𝄐𝄇 15:53, 3 March 2020 (UTC)

A register bitwidth

The article states that the A register is an 8-bit register, but the instruction set seems to contradict this. For example, the instruction `ldax b` loads register pair `bc -> a`. That would require A to be 16 bits wide. Similarly, `stax b` does the opposite, loading `a -> bc`. — Preceding unsigned comment added by Dakoolst (talk · contribs) 22:18, 12 April 2021 (UTC)

Don't know where you're getting your instruction-set info, so I can't remark on its presentation, but you're misunderstanding how those instructions work. LDAX B does an 8-bit load of A from memory using the 16-bit memory address in BC (one notation for this is (BC) -> A); similarly, STAX B does an 8-bit store of A to memory using the 16-bit memory address in BC (A -> (BC)). Each of those instructions uses the 16-bit register pair as a pointer to an 8-bit memory location. --Colin Douglas Howell (talk) 15:47, 14 April 2021 (UTC)
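Spelled out as a minimal fragment:

    lxi  b,2000h    ; BC = 2000h, a 16-bit pointer
    ldax b          ; A <- byte at memory[2000h]  (8-bit load)
    inr  a
    stax b          ; memory[2000h] <- A          (8-bit store)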

PSW push order

An unregistered user switched the accumulator-flag order of the PSW in the register infobox, citing this theory:

"The diagram for PSW has a status byte of flags, and the accumulator. When the PUSH PSW pushes these two bytes the accumulator pushes as the low byte and flags as high byte. You can compare these positions by doing mvi b,0x12 mvi c,34 and push bc, in this case the 34 is the lower byte in ram then 12. If you do push psw the lower byte of ram is the status and accumulator one byte higher. This suggests when treating status, accumulator as a 16bit pair that the accumulator is lo"

This observation is largely correct, up until the conclusion. The 8080 is little-endian, meaning the LSB sits at the lower address and the MSB at the higher. A simpler test than the one described above: XRA A, PUSH PSW, POP H. H will contain zero, and L will show Z (bit 6) set. One source of confusion may be that when compiling 8080 source to 8086 object code, the 8086's LAHF and SAHF instructions put the flags in the MSB. RastaKins (talk)
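Written out, that test goes as follows (the flag value assumes the documented 8080 flag layout, with constant bit 1 set and even parity for zero):

    xra  a       ; A = 0; sets Z=1, CY=0, AC=0, P=1
    push psw     ; A goes to the higher stack address, flags to the lower
    pop  h       ; H <- old A = 00h, L <- old flags = 46h (Z, P, bit 1)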

If the 8086's source-code compatibility is to work right, then they have to agree. Otherwise, as well as I know, both the 8080 and 8086 use stacks that grow down in memory; that is, a push decreases the stack pointer address. I don't have either 8080 or 8086 books nearby. Gah4 (talk) 18:36, 29 March 2024 (UTC)
When converting 8080 code to 8086, PUSH PSW is substituted with LAHF, PUSH AX. In most code, a PUSH PSW will be balanced by a POP PSW, which converts to POP AX, SAHF. Even though A and the flags are reversed on the stack, this works just fine as long as the stored PSW is not POPped into a general register. Only 8080 debuggers are going to care about this. RastaKins (talk) 22:31, 29 March 2024 (UTC)
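In other words, the substitution pattern described is:

    ; 8080 source        ; 8086 translation
    PUSH PSW             LAHF            ; AH <- low byte of flags
                         PUSH AX
    POP  PSW             POP  AX
                         SAHF            ; flags <- AH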
I accidentally found my Z80 book when not looking for it. The explanation describes the HL, BC, DE, and AF registers. While they use AF in the example, they don't indicate which byte is A and which is F; but given that it is called AF and not FA, and for consistency with the other three, A would be the high byte and F the low byte. Since they are pushed in decreasing memory order, to be little-endian the high byte is pushed first, which it does say. I never tried the source-code compatibility, though. Gah4 (talk) 00:38, 30 March 2024 (UTC)