Talk:Single-board microcontroller


Not a microcontroller[edit]

Something intended to be used for control purposes must power up and run a control program. By the time you add hardware to a KIM 1 to store an application program, you might as well build a board with the 6502 on it already and skip the expensive and useless mask ROMs that your application isn't going to use anyway. We are needlessly confusing the reader by lumping everything since ENIAC in with boards designed for control systems that had at least an EPROM socket on them. If you were building a controller in the 1970's, you might have had a KIM 1 kicking around the lab for education purposes, but your production hardware would not have used it, or you would have spent at least as much on expansion as the KIM cost in the first place. Yes, accuracy is a point of view, I'm sorry this distresses some editors, but I really can't understand why they want to lump in demo boards like the KIM 1 with microcontrollers. Please find me a reference saying a KIM 1 is a microcontroller board. --Wtshymanski (talk) 13:36, 7 June 2011 (UTC)[reply]

Imagine a heart-lung machine controlled by a stock KIM 1. Switch on the power, then find the cassette deck and cables, plug into the KIM, press buttons on the keyboard (was it a single button or some arcane jump instruction you had to enter from the keyboard? I can no longer recall.), press "PLAY" on the cassette deck, then press "RUN" after the tape loads...yeah, that's the very epitome of process control. Even in 1977 we could do better. --Wtshymanski (talk) 13:40, 7 June 2011 (UTC)[reply]
The rather nice, but wordy, introduction says

A single-board microcontroller is a microcontroller pre-built onto a single printed circuit board. This board provides all of the circuitry necessary for a useful control task: microprocessor, I/O circuits, clock generator, RAM, stored program memory and any support ICs necessary. The intention is that the board is immediately useful to an application developer, without needing to spend time and effort in developing the controller board.

    • A 6502 is a general purpose microprocessor, not a microcontroller.
    • Yes, a KIM-1 is a single board...at least as far as it goes. Add an EPROM board to it and it's not single-board any more.
    • The KIM-1 board does NOT provide all the circuitry for a useful control task, particularly, it lacks any user-programmable EPROM. No real-time clock or battery backed RAM either but that's probably not essential.
    • The board is not immediately useful for control applications because you have to find an EPROM board and adapt it to the KIM-1 - and there's no provision for mounting any daughter boards to the KIM-1, so you've got to figure that out as well.

I'd also point out that if you were mad enough to have bought a KIM-1 for control purposes you're not likely to use all that mask ROM on the board anyway, and the configuration of the board doesn't lend itself to compact integration into any product.

I bought a KIM-1 31 years ago for use in my thesis project. I had a lot of these difficulties to overcome just to develop my system software on it. The thing that I wanted to build wouldn't have used any KIM hardware at all, particularly not the nasty expensive mask ROM/IO chips. We would have *killed* for an Arduino in those days; would have been a palace to us. --Wtshymanski (talk) 21:02, 7 June 2011 (UTC)[reply]


Your mistake is to equate 'microcontroller' with 'to be used for control purposes'. Most of them sold, certainly the single-boards of this period (the actual subject here), certainly the low-cost boards, weren't sold in the expectation that they'd ever be used for this purpose to the full, but rather that they'd be used for education and training. Whilst your basic point that the KIM 1 wasn't suitable for controlling anything vital is correct enough, it's simply irrelevant - no-one was suggesting that it was, nor that 'microcontroller' requires it to be so.
As to the practical aspect of how to use a board like this as a controller without supplied NV memory, there are many ways round this. From memory, as all of these are ways I did it myself back then:
  • Keep the power on. It's probably not a live controller that must come up clean from a cold boot, just a class demonstration. Battery back-up if needed.
  • Replace the boot ROM chip with another, programmed for the task in hand. As the single-application ROM only needs to do one task, not to offer a full monitor, this is quite easy.
  • Add another EPROM to the bus. It's a 6502 and there's no need for DRAM refresh, so this is an easy piece of breadboarding.
  • Add an additional hardware board (quite possibly doing some A/D task or typical 'controller' function) and use this board for extra EPROM too. Many of these 3rd party add-on boards were already designed with just such a socket, often they could map their addresses so that they could take the boot ROM position (the original ROM was also pulled and moved to the add-on board).
  • Duplicate the entire controller to a modified home-brew. Copyright wasn't such an observed issue in those days, so it was pretty common for a class studying with the full-monitor and keyboard version of the controller to also make their own minimalised copies of it. The boot ROM was copied, as was the PCB layout. The hex keypad, display etc. might be ignored in favour of adding some I/O drivers or an EPROM socket. A similar practice continues to this day with the Arduino and its minimalised Boarduino variants.
As to the issue of nomenclature, then that was the marketing of the period. The smallest of single-board machines were sold on the basis of vastly inflated claims of computing power, not on how small they were. Despite this, the topic is the topic, not the name attached to it in the past.
As a Brit, I'm not particularly familiar with the Kim 1 / AIM 65 - we'd be more likely to have the early Acorns and the Microtan 65. It's listed here because there's already some wiki coverage of the Kim. Mostly though, it's a good example of two related machines where one was a minimal controller and the other a rather more sophisticated device aiming at being a general purpose computer. Not a clear distinction in those days, as even the larger machine was still pretty small, but it is a good and clear example of the point being discussed in the article. Andy Dingley (talk) 11:53, 9 June 2011 (UTC)[reply]
Look at the introduction to the article. Look at how much hardware you've described as added on. You basically describe throwing out the store-bought KIM or AIM and building your own controller. That's not what microcontroller boards are aimed at. Something like the Arduino board or BASIC Stamp can be put into a small-volume product without any engineering to figure out how to expand it, power it, and provide non-volatile program storage. That's the point of a single-board microcontroller. How does it serve the reader to equate antique development boards with something that is purpose built to fill the spot on the diagram labelled "Controller goes here"? If a KIM 1 is a microcontroller, is an Apple II? (At least they had a spare EPROM socket.) A Commodore 64? An IBM PC? (Lot of spare sockets!) Not really.
It doesn't serve the reader's interests to claim that any random board is a microcontroller board. You might as well claim an IBM 1800 was a microcontroller - sure it needed an air conditioned room and weighed 2000+ lbs, but by golly you could hook it up to digital and analog process signals! (That took another fridge-sized box of gear).
I don't know how you see this as being a good and clear example as you've tagged anything with a microprocessor in it as being a microcontroller - and they simply aren't. Flip through the ads in BYTE or its successors - you'll see lots of "single board computers" with hex keypads, and you'll see ads for microcontroller boards (usually in the back in 1/8th column ads, and later in BYTE's run). Distinct products, distinct purposes. You might have used a KIM 1 in 1977 to control your automated chicken plucker because you didn't have many other choices - but you would have spent months in the lab building or integrating various expansion boards and card cages, all of which would have cost several times the price of the KIM. You would have had to figure out how to package it. And if you planned to sell more than a handful of automatic chicken pluckers in a year, you would have thrown all that engineering out and started over with a custom board because you didn't want to pay for the custom mask ROMs you weren't using, for the cassette and teletype interfaces you weren't going to use, for all those card-edge connectors that would get full of feathers, for a hex keypad and display that weren't mounted in the right spot to be actually usable. Whereas, with a real microcontroller, you bolt it to the box, run the wires in, and load code and run it. Totally different product. There's no need to talk about development boards that weren't intended to be used in controller applications when we can actually talk about boards intended to be used for control.
"Don't lie to the readers" should be one of the principles of Wikipedia. --Wtshymanski (talk) 13:27, 9 June 2011 (UTC)[reply]
As you've clearly not bothered to read it, I present the first para of my last reply once again:
Your mistake is to equate 'microcontroller' with 'to be used for control purposes'. Most of them sold, certainly the single-boards of this period (the actual subject here), certainly the low-cost boards, weren't sold in the expectation that they'd ever be used for this purpose to the full, but rather that they'd be used for education and training. Whilst your basic point that the KIM 1 wasn't suitable for controlling anything vital is correct enough, it's simply irrelevant - no-one was suggesting that it was, nor that 'microcontroller' requires it to be so. Andy Dingley (talk) 14:06, 9 June 2011 (UTC)[reply]
I didn't believe it the first time I read it, either. Please provide citations showing a KIM-1 was sold or promoted as a "microcontroller". It wasn't sold as a controller in ANY period. What is a "microcontroller" for, if not to "control" something? Or is this more of the curdled thinking that we've gotten used to after so many years of clicking on "Start" to shut down the computer? "Single-board microcontroller" is not synonymous with "single-board computer", but is only a subset. If you wanted to control something today, would you be reaching for something like a development board or microprocessor trainer that requires tons of extra glue to make it usable, or would you say "Oh S*T, I haven't got time for this, where's the ad I saw in "Circuit Cellar"? --Wtshymanski (talk) 15:16, 9 June 2011 (UTC)[reply]
So just what is your point? - you've changed it again.
Most of the single-boards sold for controller purposes in the '70s weren't fit for production-grade use as such. This is not a problem: they were training devices instead. Trainers cost a hundred; reliable controllers based on the same processor family might cost five times that.
The Kim-1 was not the world's greatest computer or even controller. Not even in its day. However it was a popular and well known example. It also came in the "little and large" formats (with the AIM 65) that are why it's listed here. If it wasn't any different from the AIM 65, and wasn't intended for a rather more restricted purpose, then why two products?
A "single board" isn't restricted to a literal "one board and no more". Many had a separate board for PSUs. Even today with the Arduino, one of its great virtues is its easy expansion through a 'Shield' to a two-board device. The defining point about a single board machine at this time is that it wasn't a card-cage or bus machine and that it could execute some minimal code with just the single board in use (i.e. processor, bus control and some of the memory were co-located). This doesn't preclude further expansion for some tasks. The Kim 1 meets this.
Adding additional 'glue' to a single-board is a common task. As the glue often varies according to the arcane needs of each application, it's impractical to make one board that supports everything you might want. In the early '80s, there was also the important aspect that I could make a PCB to house a couple of 14- or 16-pin ICs, but laying out multiple 28- and 40-pin devices would be either beyond me, or certainly a serious effort. It made a lot of sense to buy in a working board with a working bus on it, and then add my own board for simple (but particular) IO tasks.
Were they called microcontrollers at this time? Not usually. I'd be interested to know just when that phrase was coined. However the concept, and their use, was around long before that word became the standard way to refer to them.
What is a "microcontroller" for, if not to "control" something?
Mostly to teach students how to control something, probably using a better and more robust version of a similar device with a processor from the same family.
"Single-board microcontroller" is not synonymous with "single-baord computer", but is only a subset.
That would be why there are two articles: one here at Single-board microcontroller and one at Single-board computer.
If you wanted to control something today,
This isn't an article about what I'd do today, it's mostly about the historical situation of the late '70s and '80s. These days I might use an Arduino (cheap embeddable component, and its own dev board, all for £20) or else recycle an old PC with squiggaflops of processing power that I don't need, but already have gathering dust. Neither are relevant to the early developments of 30 years ago. Andy Dingley (talk) 15:54, 9 June 2011 (UTC)[reply]
it's mostly about the historical situation of the late '70s and '80s.
So that means yes, there was lots of PL/M in use for this task. On the 8080 platform rather than the 6502-based Kim 1, but those big blue cubes of the Intel SDK machines were very often cross-compiling PL/M for 8085 embedded controllers.
Your rhetorical question "Are there really microcontroller development systems out there currently using PL/M?" in Wikipedia:Articles for deletion/Single-board microcontroller is an irrelevance because this article is mostly about the historical development of single board controllers, and "currently" has nothing to do with it. Andy Dingley (talk) 19:05, 9 June 2011 (UTC)[reply]

History of microcontroller boards[edit]

It wasn't at all clear to me that you were talking about history only; certainly the lead illustration is a pretty contemporary example of the microcontroller. I don't know that history alone is all the article needs; single-board microcontrollers are quite popular (Arduino, and so on) even now. --Wtshymanski (talk) 19:46, 9 June 2011 (UTC)[reply]

If there's going to be a history, then there should be a section titled "History". References are going to be a drag. When did controller-type boards (as opposed to demo boards) start being advertised? I have a few BYTE magazines from the late '70s and early '80s, I suspect the ads will start at some point.

Maybe the history of single-board microcontrollers is coterminous with the history of single-chip microprocessors. Here [1] is Intel in 1976 claiming the SBC80/10 as the "first" single-board computer, targeted at integration into embedded systems. There are a lot more chips on that board than you'd find on a modern system but it is definitely being marketed to OEMs as a bolt-down solution instead of rolling their own 8080-based board. --Wtshymanski (talk) 14:17, 10 June 2011 (UTC)[reply]
The section you didn't bother to read is called "Origins".
Your Intel ref is interesting, because not much else of 1976 is so clearly on line. You might note that (just like the Kim 1) it doesn't use the term "controller" anywhere. It does however discuss PL/M.
This isn't the sort of device I was really talking about in this article, because it's too complex and too expensive. It's certainly within most definitions: it's single board and it can be used for control, but it's on the fringes. Even in 1976 there were cheaper ways of doing this - why spend for bus drivers that you'll never need for a single board installation? Can you even use this board on its own, without also buying a card cage that costs more than a Kim 1 does altogether? With its bus, cage, range of expansion boards and its UARTs, this is a device that is really aspiring to be a computer not merely a controller, and that's something that the article has identified as a separate piece of kit, right from the start. Andy Dingley (talk) 14:36, 10 June 2011 (UTC)[reply]
It would help me to climb down the ladder of abstraction. Could you point at some things you mean by "Single board microcontrollers", if possible, highlighting how they differ from what Intel is selling in this brochure? You don't need to use a card cage to use this board (but you can if you want). Unlike the KIM-1, you are paying at most for a few cheap bus driver chips that you may not use, compared to the expensive mask ROM/IO chips that contained the "Keyboard Input Monitor" program in the KIM. The SBC 80/10 has 4 sockets for user-programmable EPROMs, which you could burn with your Intellec development system. I haven't downloaded the user manual for this board yet, but I'm pretty confident it will show a stand-alone configuration where as soon as the power comes up, the chicken-plucking application program starts up with no user intervention. Intel is targeting this board at embedded systems developers who otherwise would have to use a minicomputer or minicomputer board or roll their own microprocessor controller. And Intel says in this pamphlet that if you find yourself selling a lot of machines using this board, they will license the PCB artwork to you so you can build your own board and drop any features you're not using from the Intel SBC or add things you do need.
Google Books snippets show a few publications that talk about using this exact board as an embedded system. There are also a few snippets that use the words "single board microcontroller" but some of those are before 1975 when microprocessors were uncommon. --Wtshymanski (talk) 15:06, 10 June 2011 (UTC)[reply]

"Aspirations"?[edit]

Beware the pathetic fallacy - I don't really think my KIM-1 languishes in a box in the basement dreaming of having a graphics display and a mouse. There must be a better way to phrase this. --Wtshymanski (talk) 15:25, 6 July 2011 (UTC)[reply]

Which bus is the internal bus?[edit]

You've got things like, say, an 8080, which has a full-up for-real external (off-chip) address and data bus. It's a Von Neumann architecture, if that explains anything, so you can put either program memory or data memory on the bus (in fact, you must, because there's no internal memory on-chip). Then you've got things like an 8051, which can also have an off-chip address and data bus, which might be used for data memory, or program memory (but there's also on-chip program memory), and it happens to be a Harvard architecture (though I'm sure with enough glue logic you could put programs and data into the same battery-backed static RAM if you want to.)

The way the computer is organized has nothing to do with the expansion bus. You can't tell from the card-edge connector that the chip on board is a Harvard architecture or just a very well behaved program on a Von Neumann architecture that has agreed not to try to write to its program memory. The thing that Joe Gas Pump Kiosk designer cares about is that if he spends an extra $4.95 per controller board, he doesn't have to worry about sweating his program down to 4K memory on-chip and can put it on an external ROM. --Wtshymanski (talk) 19:03, 21 October 2011 (UTC)[reply]

Don't lie to the readers[edit]

"Universally" Von Neumann? Pfui. An 8048 or an 8051 was not Von Neumann. A PIC is not Von Neumann. I wouldn't even want to guess which philosophy is more common in single-board microcontrollers unless I had a nice recent market report close at hand. --Wtshymanski (talk) 19:23, 21 October 2011 (UTC)[reply]

Of course not - that's the whole point. However these are the next generation, the single-chip controllers. The generation here, when board-level integration became available for a controller's budget, but before chip-level became technically practical, were the Z80s, the 6800s and the 6502s. These have to be a Von Neumann bus, because it's the only way to get their (single) bus out through the limited pins of a DIP package. A Harvard bus just isn't possible without either a board level processor that's already in multiple packages, a many-pin surface mount package, or else (as happened with the 8048 et al) a closer integration so that the buses and system RAM/ROM can remain wholly inside the chip package, and all that's needed externally is the I/O.
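(An aside: a rough pin budget makes the package constraint concrete. The Z80 figures below follow its standard 40-pin DIP pinout; the Harvard comparison at the end is hypothetical arithmetic, not a real part of that generation.)

 # Pin budget for a single (Von Neumann) bus in a 40-pin DIP - Z80-style figures:
 address_pins    = 16   # A0-A15
 data_pins       = 8    # D0-D7
 control_pins    = 13   # /M1, /MREQ, /IORQ, /RD, /WR, /RFSH, /HALT, /WAIT,
                        # /INT, /NMI, /RESET, /BUSRQ, /BUSAK
 clock_and_power = 3    # CLK, +5V, GND
 print(address_pins + data_pins + control_pins + clock_and_power)   # 40: the package is full
 # Bringing a second, Harvard-style program bus of the same width out of the
 # package would need roughly 24 more signal pins - not possible without a
 # larger package or heavy multiplexing.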
If this section is unclear, or you've failed to see the point, then by all means either fix it or tag it for someone else to fix it. This is an issue of English, or prose expression - fair enough. Yet what you're doing, and what you always do, is to take the line that "If Wtshymanski doesn't understand it, then it's Wrong" and you then delete the entire section. You are not the sole arbiter of technical accuracy. Andy Dingley (talk) 20:18, 21 October 2011 (UTC)[reply]
It's not prose expression, it's just not accurate and it's certainly got too many buzzwords. When I laid out printed circuit boards for an 8048 processor, I could count 40 pins - and I knew it was a Harvard architecture. 8048s go back to (nearly) the "dawn of time", as far as microprocessors go. The limitations of DIP lead frames have very little to do with the design of microcontroller boards and nothing to do with "Harvard" or "Von Neumann" architecture. When the 4004 was built, they multiplexed *everything* to get it into a 16 pin DIP. The 68000 has "supervisor" and "user" modes available on the status bits, in principle there's no reason you couldn't have used the same chip pins for both program memory and data memory, with a status bit to decide which chips got the signals off the 40 pin DIP package. And today's single-board microcontrollers may use either Von Neumann or Harvard and that is the *least* important factor in selecting which board to buy.
If Wtshymanski doesn't understand it, it's certainly not clearly explained in the article he's just read; and very often it is wrong. I don't touch (or read) the maths articles, I can't tell when they are bullshitting me, but I can count to 21 without taking my socks and pants off. In that zone I will continue to be bold, as the encyclopedia exhorts us. --Wtshymanski (talk) 20:38, 21 October 2011 (UTC)[reply]
The 8048 certainly doesn't go back to the "dawn of time", it had to wait for chip scale integration to get to the stage where RAM & ROM could be on-chip (and so the bus didn't need to go off package). There's a whole generation, that was important for years, before this. Once again, everything has to be viewed through the "Wtshymanski goggles" - your generation had the 8048, so nothing different can have existed before this and the previous approaches can only have been doing just this, but in black & white. Andy Dingley (talk) 20:53, 21 October 2011 (UTC)[reply]
"the Z80s, the 6800s and the 6502s. These have to be a Von Neumann bus, because it's the only way to get their (single) bus out through the limited pins of a DIP package." Ah, no, not really. You just designate one pin, formerly an address pin, to denote I-space vs. D-space, in the manner of the PDP-11. Your program ROM responds to one set of addresses and your data RAM to another. Jeh (talk) 05:43, 22 October 2011 (UTC)[reply]
That said, I know of no single-board microcontrollers OR single-chip microcontrollers that implement a true Harvard architecture with its performance benefits. Yes, they commonly have separate flash RAM for code vs. volatile RAM for data... but that doesn't mean they can perform code and data access at the same time even though there appear to be "separate" buses serving different address ranges. Nor is it a given that a uP of this sort *can't* execute code out of the volatile RAM, unless they implement separate I and D space and provide no way to turn that off. This therefore becomes more a quirk of the implementation's topology rather than a true distinction between Harvard and von Neumann architecture. Jeh (talk) 04:01, 22 October 2011 (UTC)[reply]
Tell us more of this PDP-11 based single-board microcontroller.
Yes, you can do all sorts of things that aren't listed here. However the scope of this article is (or at least used to be) fairly clearly defined as those single-board controllers of predominantly the 1980s where board-level integration worked and was affordable with chips like the Z80, but before the single-chip controllers appeared and reduced the need to use a whole board. An Arduino isn't quite the same thing as a Microtan. This is an encyclopedia after all, the main goal is to communicate, not to exhaustively define. No doubt there are many arcane multiplexed architectures that would be possible, but if these weren't used or (for a generalist article like this) weren't used enough to become part of the contemporary single-board microcontroller landscape, then including every possible option and variant loses clarity and focus at a damaging cost to article readability (look at the IEP engineering project articles right now). Maybe there was an LSI-11 microcontroller. Maybe HP built something out of their in-house processors as used in the HP85. If there really was something in that space, then it would be worth including. If it's just a theoretical possibility for an architecture though, it shouldn't be. The point here is that first generation chips (counting from the Z80) were Von Neumann, later single-chip generations like the AVR started to use Harvard (or at least, a Modified Harvard). Andy Dingley (talk) 09:30, 22 October 2011 (UTC)[reply]
Hmm. As usual with our parts list articles, Intel MCS-48 has no dates for introduction of the chip but clicking on one of the links attached to the article gives us a user manual copyright 1978 - which is bragging about the new parts being faster than before. Zilog Z80 says it was sold starting July 1976. One and a half years isn't very much time between releases, we can wait longer than that for a new version of Windows. Are you *sure* that the 8048 was so much later on than the 8085 and its ilk? --Wtshymanski (talk) 21:36, 22 October 2011 (UTC)[reply]
The distinction is less between the processor families and more about the amount of onboard RAM available. It was only about 5 years between the 8080 (a better start point than the Z80) and the 8048, but the first single-chip controllers had so little RAM available (only a couple of hundred bytes) that their use was extremely limited. Many controller tasks, even quite simple ones, needed more RAM than could be fitted into a controller, so had to stay with the bus-based boards. Andy Dingley (talk) 21:45, 22 October 2011 (UTC)[reply]
There were several different uCs built around the J11 (11/23 on a chip) and F11 (11/44, IIRC) chips. And if you argue that the KIM-1 was a "microcontroller" (even though lacking peripherals) then you have to admit DEC's own LSI-11/03 module qualifies too.
My real point though is that the situation is more complex than the article suggests. It isn't really Harvard unless the machine flatly can't execute instructions out of data space; this is only loosely associated with whether there are separate buses for I and D. I think the current wording too much links the concepts of "Harvard architecture" and "separate buses". As an aside, a set of signals that just runs from one element (the processor core) to one and only one other element (the program store ROM or flash RAM) shouldn't really be called a "bus" at all. Jeh (talk) 22:52, 22 October 2011 (UTC)[reply]
PDP-8s had a long career as controllers (Anyone else remember the "Aha! The old computer in the rumble seat" mad scientist advert that ran for years?). It's hardly surprising that DEC should have made a more dedicated controller version with the LSI 11 chipsets. If you can add anything here, please do. As to the KIM-1, then doesn't a lack of peripherals indicate a bias towards the controller role rather than computer?
I don't see your issue over Harvard architectures. Which ones are you thinking of, where the buses are separated but they aren't a true Harvard? Even for the modified Harvard, it's more the case that program space can be written to with a specialised write instruction, so as to permit in-circuit programming. The memory spaces are still separate and program memory never really becomes available as a general data space.
A bus with just one passenger doesn't stop being a bus and become a taxi. If the bus can only reach one peripheral then there's maybe a claim that it isn't a multidrop bus, but a multidrop bus that just happens to only have one peripheral connected is still a bus, just one with little occupancy. Andy Dingley (talk) 23:28, 22 October 2011 (UTC)[reply]
Once again we've got the pompous gas-baggery that Wikipedia is so justly famous for. What is the *significance* of the compsci buzzwords in this context? It's just pointless obfuscation that does nothing to explain the topic. --Wtshymanski (talk) 03:42, 28 October 2011 (UTC)[reply]
Please do not attack other editors. If you continue, you may be blocked from editing Wikipedia. --Guy Macon (talk) 06:14, 28 October 2011 (UTC)[reply]

Potential Edit War[edit]

Recently, Wtshymanski removed content about early single-board microcontrollers (see page history) and replaced it with an unsourced claim that some early single-board microcontrollers had a Harvard architecture. As justification for the removal, he cited the Basic Stamp as a counterexample.

I reverted Wtshymanski's edit with the comment "You cannot use attributes of a modern microcontroller-based board as justification for removing content about early single-board devices." Basic stamps are not early single-board microcontrollers. Arduinos are not early single-board microcontrollers. The portion that Wtshymanski deleted clearly says: "When single-chip microcontrollers became available later on, the bus no longer needed to be exposed outside the package and so the Harvard architecture of separate program and data buses (both internal to the chip) became popular." You can't use examples using single-chip microcontrollers to dispute a statement that specifically excludes single-chip microcontrollers.

There were a couple of intermediate edits by Jeh, which I retained. Of course, the edits he made changing the wording of Wtshymanski's unsourced claim were lost, but I kept all of Jeh's other edits intact.

Then Jeh reverted back to the unsourced information with the edit comment "I removed nothing! I moved a heading. otoh I see no justification for having two paragraphs that say almost exactly the same thing)"

Because this appears to be on the verge of becoming an edit war, I restored it to the last known good version, and invited Jeh to discuss it here. My edit comment was "Reverted to version as edited by Andy Dingley at 22:10, 22 October 2011. Please discuss on talk page before re-adding unsourced claims about Harvard architecture."

Please read Wikipedia:Edit warring and especially WP:3RR before reverting again. --Guy Macon (talk) 05:31, 23 October 2011 (UTC)[reply]

Well this is certainly puzzling and embarrassing. When I looked at this diff I completely missed the point of your changes to the first graf under "Internal bus". I'm not going to edit or revert that graf again... but I would suggest that the word "universally" is just begging for trouble.
Regarding moving the heading - I will say again that the paragraph you added immediately before the L2 heading "external bus expansion" is almost identical to the paragraph that immediately follows that heading. I moved the heading originally because I thought a description of an expansion connector belonged under "external bus expansion", not "internal bus". I guess you thought I'd deleted the paragraph completely (rather than moving the heading up a graf) so you "restored" it? I really see no point in having two paragraphs, one right after another, that say nearly the same thing. Maybe the heading should be made a level 3 and renamed "bus expansion", and the graf in question appear only after that heading? Jeh (talk) 02:25, 24 October 2011 (UTC)[reply]
On the other hand - the first graf under "internal bus" says that in later uCs "the bus no longer needed to be exposed". Both versions of the next paragraph then talk about an expansion connector, which will be kinda tough if the bus never gets out of the chip in the first place. Jeh (talk) 02:28, 24 October 2011 (UTC)[reply]
A clearer distinction (for Wtshymanski's benefit) might be to say that the Harvard bus _wasn't used on the board_. Von Neumann buses were, Harvard buses were _inside_ the 8051 and others, but these controllers didn't have a Harvard bus that came outside of the package and ran around the board. I presume this is what Andy Dingley meant?
The DEC processors sound interesting if anyone can turn something up. There was also the HP 9915 that was a cheaper, rack-mount version of the HP 85 with no screen or keyboard. This was a cased machine though and not a single board. Given the markets they were both selling into, I'd guess that any DEC product would have been similar and so neither are really "single-board microcontrollers". 213.249.204.90 (talk) 13:15, 24 October 2011 (UTC)[reply]
Both of the above comments by Jeh and 213.249.204.90 make a lot of sense, and I would have no problems at all if the article were edited to reflect the above, just as long as Wtshymanski's removal of content isn't put back in with a revert to one of his versions. Guy Macon (talk) 16:17, 24 October 2011 (UTC)[reply]
I missed several days worth of churning on this article. You see the trouble that the irrelevant buzzwords have caused? No-one but a CompSci student cares about "Von Neumann" or "Harvard" architecture. We're conflating chips with boards, modern with ancient, and confusing the whole point. Yes, you could add a memory board to some single-board microcontrollers; how the processor thought about that memory was its private business and of little concern to the user, who ROMed all his code anyway and didn't much mind about separate address spaces for instructions or data. (Of course, once you had added a memory board to a single-board controller, you lost one of the key advantages, viz, its single-boarded-ness. But sometimes you couldn't sweat the code down to 1K, 2K, etc. ) --Wtshymanski (talk) 04:32, 27 October 2011 (UTC)[reply]

Please explain why, other than "I don't like it", you are once again removing content without consensus to do so. Specifically, you have removed the content that explained that all early single-board microcontrollers were Von Neumann architecture, and you have removed content that explains why (Harvard needs more pins than were available on the 40-pin DIPs that the microprocessors used.) Guy Macon (talk) 08:00, 27 October 2011 (UTC)[reply]

If you don't understand what a "Von Neumann" architecture is, you could try reading the linked article about it rather than deleting it? Isn't this how an encyclopedia is supposed to work? It covers _the stuff you don't know_ so you can go learn about it. This sounds like an interesting point anyway: one of the first times that a rather theoretical aspect of the buses becomes obvious at the practical product level. 213.249.204.90 (talk) 10:00, 28 October 2011 (UTC)[reply]
Although Wtshymanski does at times remove content that he doesn't understand, he often removes content that he understands perfectly. I have never been able to figure out his motivations, but he has a "thing" about deleting content, from paragraphs to entire articles. He will throw out alleged reasons why he wants to delete specific content, but they do not appear to be his real reasons, as evidenced by the fact that whenever someone refutes his argument he simply makes up a new one, never ever changing his mind based upon discussion. You are right about it being an interesting point: earlier computers made of vacuum tubes and transistors or even 7400-series logic had no restriction on how many pins a bus could have, and later ones had high pin count packages. It is interesting and useful information to know that in the era of 40-pin DIP microprocessors Von Neumann architecture was always chosen - because of the 40-pin DIPs. -Guy Macon (talk) 11:00, 28 October 2011 (UTC)[reply]
How many pins on an 8748? 40? And it's which CompSci buzzword, now? At least it's Turing-complete, whatever *that* means. --Wtshymanski (talk) 04:00, 30 October 2011 (UTC)[reply]
<s>49</s> 40 pins on an 8748. See [ http://www.westfloridacomponents.com/mm5/graphics/F06/D8748H.pdf ]. If you don't like "CompSci buzzwords" you might want to consider not reading Wikipedia pages about Computer Science. To learn what "Turing-complete" means, start here: Turing completeness. As for your repeated attempts to remove sourced content from Wikipedia and your continued refusal to work cooperatively with other editors, there is a strong consensus that you are on the wrong track. Rather than wasting all of our time undoing the damage you are doing, why don't you find another hobby? --Guy Macon (talk) 08:05, 30 October 2011 (UTC)[reply]
Did you look at figure 3 in that data sheet? How many pins do you see? Now I know you're playing with me. Harvard/Von Neumann has nothing to do with the number of pins on a DIP and certainly has nothing terribly significant to say about single-board microcontroller expansion busses. --Wtshymanski (talk) 18:32, 30 October 2011 (UTC)[reply]
Typo corrected. Harvard/Von Neumann has everything to do with the number of pins on a DIP. Just because you cannot understand an engineering concept, that's no reason to delete it. Harvard, with dual busses, requires twice as many pins as Von Neumann -- too many for a 40-pin DIP. This has been explained to you before, and is explained quite well in the material you tried to delete. Again I ask, why don't you find another hobbyhorse? Guy Macon (talk) 19:15, 30 October 2011 (UTC)[reply]
Tell me, Mr. Bones, which architecture was a 4004 and how many pins did it have? Why is an 8048 a 40 pin Harvard chip and an 8080 a 40 pin Von Neumann chip? If this was a discriminant, surely these two parts from the same company would have had either the same architecture or different pin counts. It's always been my position that Wikipedia shouldn't lie to the reader on purpose, and snowjobbing these terms in this context as if they make a pinch of difference to the pin count of the chip is actively misleading the reader. --Wtshymanski (talk) 19:33, 30 October 2011 (UTC)[reply]
The 8048 is not a Harvard architecture, not having separate program and data buses. Yes, I am aware that it is called a "Modified Harvard architecture" - a term that encompasses everything from an 8048 (a 99% Von Neumann machine with one small difference) to the Atmel AVR (a 99% Harvard machine with one small difference). If you would like to expand the article to make this distinction more clear -- as opposed to simply deleting big chunks of it -- that would, IMO, be an improvement. Guy Macon (talk) 20:11, 30 October 2011 (UTC)[reply]
And how many pins on a 4004? --Wtshymanski (talk) 22:09, 30 October 2011 (UTC)[reply]

Interesting that you used the edit comment "still irrelevant" when posting the above. I agree. The 4004 is totally irrelevant to a discussion about single board microcontrollers. 4004s were never used in single board microcontrollers. But even if they had been, it still would have been a Von Neumann machine. The 4004 had even fewer pins than later chips, and thus could not use a Harvard architecture. It had to do multiplexing tricks just to fit the single bus of the Von Neumann architecture on a 16-pin DIP. Got any other irrelevant questions? Perhaps you might want to quiz me on ENIAC's ring counters, which were _also_ not used on any single board microcontrollers? --Guy Macon (talk) 23:19, 30 October 2011 (UTC)[reply]

Here's the syllogism with all the intermediate steps laid out for those who don't find it as obvious as I do: It was asserted that the reason Harvard architecture was used was to save pins on a DIP package. Both Harvard and Von Neumann processors come in DIP packages with 40 pins. The 4004, which was *extremely* limited in pins owing to the state of the art of IC packages at the time, used a Von Neumann architecture. It is therefore incorrect to say that the number of pins on a DIP package is a determinant of the data space/program space architecture of a microprocessor, and quite irrelevant to the design of a single board microcontroller. --Wtshymanski (talk) 03:11, 31 October 2011 (UTC)[reply]

A valid chain of logic based upon a mistaken premise. The Harvard architecture is not, as you claim, a "data space/program space architecture." It is instead, as Harvard architecture claims, "a computer architecture with physically separate storage and signal pathways for instructions and data." Those two physically separate storage and signal pathways require twice as many pins. Your second error is a factual error. It is not true, as you claim, that "Both Harvard and Von Neumann processors come in DIP packages with 40 pins." Harvard architecture processors with 40 pins do not exist. As has been explained to you before, you are confusing Modified Harvard architecture with Harvard architecture. I have edited the article to make this distinction more clear. Guy Macon (talk) 07:16, 31 October 2011 (UTC)[reply]
Harvard devices do exist in small pin-count packages, the point for this article is that they don't expose these buses through the package pins. I see Modified Harvard as something of a red herring here - it doesn't change the fundamental issue, that of how many signal lines it takes to implement the bus. Andy Dingley (talk) 10:40, 31 October 2011 (UTC)[reply]
An example of a "real" Harvard architecture would be useful. --Wtshymanski (talk) 13:34, 31 October 2011 (UTC)[reply]
Please don't ask questions that can be answered by looking it up on Wikipedia. That's annoying. Harvard architecture gives several examples. Guy Macon (talk) 13:51, 31 October 2011 (UTC)[reply]
For the purposes of this article, an Atmel AVR illustrates the point perfectly well.
If you insist on not including Modified Harvard (which is pointless), then I think there are some 8749 variants, which were mask-programmed ROM and so didn't make the 8748's usual Modified distinction relevant from an original Harvard. Andy Dingley (talk) 13:39, 31 October 2011 (UTC)[reply]
It sounds like the AVR can read bytes out of program memory as data, which makes it even less Harvard-like than the 8048 and its ilk. --Wtshymanski (talk) 13:45, 31 October 2011 (UTC)[reply]
It appears that you are having trouble accepting the fact that Harvard architecture is "a computer architecture with physically separate storage and signal pathways for instructions and data." and instead keep posting bogus definitions such as the one above. Repeating the claim that it has something to do with special instructions that can read bytes out of program memory as data just shows your ignorance. It's physically separate storage and signal pathways for instructions and data. That's what it is. Deal with it. Guy Macon (talk) 13:51, 31 October 2011 (UTC)[reply]
Hold on, read the text before reacting to the editor name. The examples in the Harvard architecture article are: The Harvard Mark I (not available in a DIP package), the Atmel AVR, the Microchip PIC, and some DSPs. A.D. says he knows of a Harvard architecture in IC form. It's got nothing to do with instructions that read program memory; as far as I can remember the 8048s, for example, can't read program memory because it's in a different address space than data memory. I'm not familiar enough with the internals of the 8048 to know if it can do a program-memory-read concurrently with a data-memory read. None of this matters to the subject of this article, of course. --Wtshymanski (talk) 14:40, 31 October 2011 (UTC)[reply]
The definition of { Harvard , Modified Harvard } is based on the bus separation. Modified Harvard can also permit some data-like access to the program memory bus. This is valuable if you want to write a program loader, such as for in-circuit programming or even as the output stage of a compiler. How this is done is unimportant: it can make both physical address spaces appear in the same logical space, or it can provide a magic store instruction that writes to the program space. As the need to load programs is obviously important, and this modified architecture can be implemented without a performance drawback for executing programs, then I doubt if there's a "pure" Harvard machine left. Although, as already noted, some mask-programmed ROM controllers might be seen as having "un-modified" their Harvardness.
For the purpose here, we care about the physically separate buses, not the access overlaid on top of this. We also care about the bus when it's exposed outside the chip package. We don't care about the internal bus, we don't care about the program loader. Andy Dingley (talk) 15:00, 31 October 2011 (UTC)[reply]
Would physically separate busses include time-shared busses? It's not clear to me that any microcontroller can issue simultaneous program and data memory read cycles, which would be a capability of a real "Harvard" machine. --Wtshymanski (talk) 16:12, 31 October 2011 (UTC)[reply]
What do you mean by "time-shared buses"? If you mean simultaneous use of them, then that would sound likely (I have no idea of the 8048's actual internal timing, because the buses are usefully split like this in small devices just as much for simplicity as they are for speed).
If you mean multiplexed buses to get both out past a package pin limit, then that would sound pointless (a shared bus would be simpler) and it's also pure hypothesis - no such device exists, AFAIK. It would be possible to make one, but that's a long way from citing a real one. Andy Dingley (talk) 16:47, 31 October 2011 (UTC)[reply]
For a real-world example of a Harvard processor accessing program memory and data memory at the same time, See the Atmel ATmega128 datasheet, Page 14:
"Figure 6 (Parallel Instruction Fetches and Instruction Executions) shows the parallel instruction fetches and instruction executions enabled by the Harvard architecture and the fast-access Register file concept. This is the basic pipelining concept to obtain up to 1 MIPS per MHz with the corresponding unique results for functions per cost, functions per clocks, and functions per power-unit."
Note that the timing diagram shows one instruction fetch per clock cycle, with simultaneous execution of the previous instruction. Elsewhere in the data sheet you will see many examples of instructions that access RAM as they execute. --Guy Macon (talk) 01:31, 1 November 2011 (UTC)[reply]
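(To make that overlap visible, here is a toy two-stage model - my own sketch, not Atmel's hardware or code - in which each clock cycle fetches one instruction over the program bus while the previously fetched instruction executes:)

 # Toy model of the fetch/execute overlap that a Harvard split permits.
 program = ["ldi r16,1", "add r17,r16", "st X,r17", "rjmp loop"]   # placeholder instructions
 fetched = None
 for cycle in range(len(program) + 1):
     executing = fetched                                           # stage 2: run last cycle's fetch
     fetched = program[cycle] if cycle < len(program) else None    # stage 1: fetch the next instruction
     print(f"cycle {cycle}: fetch={fetched}, execute={executing}")
 # After the first cycle one instruction completes per clock, which is where
 # the "up to 1 MIPS per MHz" figure comes from.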

KIM photo[edit]

I see my photo of my own personal KIM-1 is much loved in this article, but aside from the implicit praise of my photographic skills, sadly, the KIM is irrelevant to the topic of this article. True, it's a single board and it had a microprocessor on it...so does an Osborne 1, for that matter. Or an IBM XT. These three machines are equally suited to being called single-board microcontrollers, which is to say...not well at all. --Wtshymanski (talk) 15:01, 6 February 2012 (UTC)[reply]

I have an idea for a compromise. I think it is plausible that a lot of people who were aware of the Kim-1 (and related machines) may think of them as SBuC's even though they weren't marketed that way and were really unsuited for the job (as much described already). I could even see such people coming here and wondering why the Kim-1 isn't mentioned. Why don't we add a very short section called "related devices", with subsection "single-board microcomputer trainers", and there describe why they're not suitable as microcontrollers (instead of hiding that info here on the talk page)? Does anyone see a reason to not do that? Or maybe it could go in a "history" section, if refs could be found that the Kim-1 led to things like the modern SBuC. Too, I suppose it would be nice if "related devices" had at least one other subsection. Jeh (talk) 18:31, 6 February 2012 (UTC)[reply]
Sure. Can we work puppies into the article, too? After all, they might be related somehow. Seriously poor idea, in my opinion. Articles should be about their topics, not free-association collections of factoids. --Wtshymanski (talk)
Or: Could it be said (and referenced) that devices like the Kim-1 were part of what led to SBuCs, once onboard EEPROM, etc., was practical? Maybe the Kim-1 could go in a "History" section instead. Jeh (talk) 18:48, 6 February 2012 (UTC)[reply]
Not likely. Intel had a real single-board controller announced before the KIM 1. The types of things that controllers were supposed to do are different from the types of things a KIM 1-style product was supposed to do, which is why they are different products, and different articles. If a 2-year-old points at the highway and says "CA!" when a Silverado rolls by, we gently correct him; and even a 2-year-old is not likely to say "CA!" when an 18-wheeler rolls by. There's a reason that a KIM 1 has a cassette port and blinky lights and a hex keypad and a ROM monitor and a current loop interface and all that other guck that the Intel single-board microcontroller doesn't have. --Wtshymanski (talk) 20:31, 6 February 2012 (UTC)[reply]
The point of the KIM-1 image is the reason why it's located in the section it's in. This is an example of a single-board using an EPROM monitor, with a hex keypad and a 7-segment display. Although the KIM-1 also illustrates the ability of some machines of this period to act as both controllers or computers, depending on the peripherals they were equipped with, that's not the reason the image is useful here. Andy Dingley (talk) 23:32, 6 February 2012 (UTC)[reply]
Well, yes, that's correct, if by "EPROM" you mean "Mask ROM". The relevance of this picture to this article continues to elude me. --Wtshymanski (talk) 23:54, 6 February 2012 (UTC)[reply]
Mask ROM / EPROM / PROM - all the same thing for this purpose, depending on the build volume.
The point is that a very important group of single-board microcontrollers provided their own development host system, yet did this at very little cost by using nothing more than the minimal controller, some extra ROM space for the monitor, a hex keypad and an LED calculator display. Programming was done by hand assembly on paper, then keying in as machine code with hex opcodes. These machines had no credible aspiration to be a "computer" in any real sense beyond this, unless expanded by a keyboard and video display, or else a terminal, and presumably some mass storage. These machines were crucially important in education (both vocational and general) because they allowed "controller development programming" to be done on a very low budget of hundreds of dollars rather than thousands. Andy Dingley (talk) 00:05, 7 February 2012 (UTC)[reply]
Really? Mask ROM? "Depending on volume", you say. If I were making enough automatic bottle-cappers or heart-lung machines to amortize a set of mask ROMS for them, I surely would have enough financing to also build a proper circuit board, instead of buying a KIM and ignoring all the crud put on for irrelevant purposes (as far as controlling went).
Are you aware of any *products* that were actually *sold* that were developed by fondling the hexpad? As someone who spent far too much time dinking about with hand-assembled code, I found it to be a very tedious way to develop software (even by my low standards of the time). I can't imagine someone *paying* someone to develop only on a dinky single-board, for development of some commercial project; the payback on a *real* hosted development system would have been pretty high even by 1976 standards, and today no-one would bother since Grandpa's cast-off 486 is way more powerful than needed for running a cross assembler. Even our 1979-era 6800 lab had us punch cards and use the Amdahl mainframe for cross-assembling our lab projects - can you believe that *student* time was considered more valuable than computer time in that era?
So, no, the KIM picture, no matter how beautiful it is, is irrelevant to the topic of controllers. It's not a controller itself, and you had to buy 20,000 pieces to get new masks for the goofy I/O/RAM/ROM/timer chips that the KIM used. I find it doubtful that anyone even in the 1970's developed software with only the hex keypad, and incredible since about 1981. --00:35, 7 February 2012 (UTC)
As you've been told over and over, the biggest market for devices like this, and yes that includes "controllers" both described, sold and used as such, was in education, not for the development of live products. Where there was a class to teach, particularly for Z80s (which had a pleasantly orthogonal instruction set), then hand-assembly was indeed common. Even if a class had access to a big host system, that was almost always limited to one or two students, whereas (with a reasonable budget for the early 1980s) it was possible to give everyone a hex-keypad based machine.
The Sinclair MK14 was just such a machine. There was also the COSMAC ELF 1802-based machine that ranged in scope from a minimal single board controller (no UI for being anything more) to a card-cage with teleprinters and all the works - mostly though it was a single board with hex keypad and calculator display, and those appeared in college level education. The ELF (or maybe, the 1802) had some odd feature where it was very easy to add simple hardware to it to allow toggle switches or a keypad to be used to enter memory values directly, so it didn't even need the monitor program. Another UK device, 6502-based, was the Microtan 65 - again, a popular piece of hardware for tech college level vocational teaching of microprocessor programming. This had such rare luxuries as a tin case around the board and although it was still usually a hex keypad for input it also had a rudimentary video output that gave multiple editing lines (wow!) on a TV screen.
None of these computers (Sinclair MK14, COSMAC ELF, Microtan 65) was sold for controlling machines, for becoming part of an embedded system; at best they were used to develop prototypes. The point being that there were and are boards sold for the purpose of controlling machinery or embedding into a system, that aren't trainers for learning about microprocessors, and that don't need fripperies such as hex keypads. --Wtshymanski (talk) 19:36, 7 February 2012 (UTC)[reply]
If you used a UK "Statesman" telephone (one of the handful of "standard supply" phones available from BT in those days) the factory that was building them had made use of a chip tester and controller programmed with just such a machine - I programmed it. A Z80 hex keypad development controller was used to develop the prototype, then the live code was burned onto an EPROM (using an Intel blue-cube SDK) that was a single rare beast elsewhere in the building. The eventual controller was a Z80 / 8255 custom PCB. This was a far from unusual way of working back then, especially as junior sprog labour was cheap.
Oh, and to address your red herring about masked ROMs, then the point is that the monitor could appear in enough volume to be masked, not the application. Andy Dingley (talk) 01:19, 7 February 2012 (UTC)[reply]
Intel...SDK (for a Zilog part?), eventual controller was... a custom PCB. --Wtshymanski (talk) 19:36, 7 February 2012 (UTC)[reply]
I find it interesting that in the process of pedantically correcting a minor error about EPROMS, Wtshymanski himself incorrectly claimed that the KIM-1 used Masked ROMs. It actually used MOS Technology RRIOTs that contained masked ROM, RAM, and I/O. See http://www.commodore.ca/manuals/kim1/kim1_users_guide.txt and http://archive.6502.org/books/kim1_user_manual.pdf --Guy Macon (talk) 10:21, 7 February 2012 (UTC)[reply]
It's very important to give the catalog number of the chip on the Wikipedia. Forgive my imprecision for saying that a chip that has a masked ROM on it is a masked ROM. --Wtshymanski (talk) 19:36, 7 February 2012 (UTC)[reply]

File:Atmega8 Development Board.jpg Nominated for speedy Deletion[edit]

An image used in this article, File:Atmega8 Development Board.jpg, has been nominated for speedy deletion for the following reason: All Wikipedia files with unknown copyright status

What should I do?

Don't panic; you should have time to contest the deletion (although please review deletion guidelines before doing so). The best way to contest this form of deletion is by posting on the image talk page.

  • If the image is non-free then you may need to provide a fair use rationale
  • If the image isn't freely licensed and there is no fair use rationale, then it cannot be uploaded or used.
  • If the image has already been deleted you may want to try Deletion Review

To take part in any discussion, or to review a more detailed deletion rationale please visit the relevant image page (File:Atmega8 Development Board.jpg)

This is Bot placed notification, another user has nominated/tagged the image --CommonsNotificationBot (talk) 08:45, 19 April 2012 (UTC)[reply]

Related discussion[edit]

There is a discussion at Talk:List of single-board computers#Scope - whether to include single-board microcontrollers that may be of interest to editors of this page. --Guy Macon (talk) 02:05, 13 November 2012 (UTC)[reply]