Talk:Computer multitasking


Do not imply Unix was always multitasking

Removed: "UNIX, designed as a "single user" operating system in the 1970s, included most of the multitasking capabilities of its multi-user cousin MULTIX, and is today used for both purposes." Bad example because it is mostly untrue. Unix was very briefly a single user system, but got both multi-user and multi-tasking features very early, and at about the same time.

Perhaps we could find a better example of multitasking being recognized as a useful feature even in a single user OS, such as a PC OS? Paragraph 8 ("Although the original purpose behind the design...") would be a good place for it. --D
Excellent example, as UNIX was designed to be the single-user version of the already existing MULTIX - the name reflects this (uniX vs. multiX) - and the fact that it later became a multiprogramming, even later a parallel-processing OS in some versions, doesn't change the fact that it originally was a single-user, single-task system, the purpose of which was playing games on the DEC PDP-7. Later hype tries to deny this fact, same as later hype tries to postulate that Gates (at times: and Allen) designed DOS, which they in fact bought ready-made from Seattle Computer. Let's kill myths. John.St (talk) 11:45, 7 June 2011 (UTC)[reply]

thnx —Preceding unsigned comment added by 122.169.66.171 (talk) 06:27, 3 April 2010 (UTC)[reply]

Change wording to not imply ordering

"To remedy this situation, most time-sharing systems quickly evolved a more advanced approach known as [preemptive multitasking]?. "

In context, this reads like preemptive multitasking developed after cooperative multitasking. Preemptive multitasking was tried in the early 1960s, but it turned out to be a lot harder to do reliably than anyone expected. Remember that back then, people routinely made I/O calls from applications and grabbing control away from somebody who had just issued a read to a paper tape reader often didn't work out all that well. I'm pretty sure that cooperative multitasking was a kludge to make multitasking work better by making sure that programs were in a suspendable state when they gave up control.

I'm not sure, however, whether a change is needed or will be an improvement. - DJK

Multitasking vs multiprogramming vs time sharing

It seems those three notions are intermixed here. A clarification is needed.

  • Multitasking is a generic term for allowing multiple tasks to be run, without regard to timing.
  • Multiprogramming is the possibility for multiple programs to be ready and waiting for the processor to be free. The initial purpose of this was to allow a second program to run while the first one waited for some I/O to complete. Multiprogramming does not require any cooperation between programs, as there were no interactive users anyway. If a program needed the CPU for a long time, it could use it.
  • Time sharing is a method whereby all ready tasks are given fair access to the CPU.
  • It should be noted that multi-tasking (a.k.a. time-sharing) is a type of multi-programming; a multi-programming system is not necessarily a multi-tasking system, however.
  • Multi-programming was put in place to keep the CPU busy rather than waiting on I/O while it could be serving other processes.

The goal of multiprogramming is to minimize idle time for the processor; that was the primary objective in the days when processor time was expensive.
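To make the distinction above concrete, here is a toy, hedged sketch (all function names are invented and I/O handling is grossly simplified): a multiprogramming scheduler gives up the CPU only when the running job starts waiting for I/O, whereas a time-sharing scheduler also preempts after a fixed quantum so every ready job gets a fair share.

def job(name, pattern):
    # One simulated job: 'c' = one unit of CPU work, 'i' = start waiting for I/O.
    for step in pattern:
        yield name, ("cpu" if step == "c" else "io")

def multiprogramming(jobs):
    # Non-preemptive: the running job keeps the CPU until it blocks for I/O or ends.
    jobs, trace, current = list(jobs), [], 0
    while jobs:
        try:
            name, kind = next(jobs[current % len(jobs)])
            trace.append((name, kind))
            if kind == "io":           # job is now blocked, so hand the CPU to another job
                current += 1
        except StopIteration:
            jobs.pop(current % len(jobs))
    return trace

def time_sharing(jobs, quantum=2):
    # Preemptive and fair: each ready job gets at most `quantum` units before requeueing.
    # (I/O blocking is ignored here for brevity.)
    jobs, trace = list(jobs), []
    while jobs:
        current = jobs.pop(0)
        for _ in range(quantum):
            try:
                trace.append(next(current))
            except StopIteration:
                break
        else:
            jobs.append(current)       # quantum used up: go to the back of the ready queue
    return trace

print(multiprogramming([job("A", "ccicc"), job("B", "ccccc")]))
print(time_sharing([job("A", "ccicc"), job("B", "ccccc")]))

In the multiprogramming trace, B keeps the CPU once A blocks (matching "If a program needed the CPU for a long time, it could use it"), while the time-sharing trace interleaves the two jobs.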

Reading from a tape

"(e.g. reading from a tape)" ?? Dated? In a modern computer would this be comparable to reading from a drive or waiting for a result from memory or an external device? - Omegatron 18:06, Oct 13, 2004 (UTC)

Suggestion: Give options: Reading from a disk drive, reading from a tape, reading from a terminal/keyboard. — Preceding unsigned comment added by 208.127.217.116 (talk) 23:09, 6 April 2017 (UTC)[reply]

Done. (I also added "receiving a message from a network".) Guy Harris (talk) 00:51, 7 April 2017 (UTC)[reply]

Merge from co-operative multitasking

weak disagree: I'd suggest that rather than merging, just add a "main article" link, as it could be argued that co-operative multitasking is a valid and reasonable article in its own right. Guinness 11:31, 15 December 2005 (UTC)[reply]

agree: There's not a substantial amount of content in the separate article. I'd suggest that it be merged, and only split it if it later gets big enough. Currently it's mostly just redundant, and the little that's not is split across two articles. QuiTeVexat 05:59, 20 December 2005 (UTC)[reply]

observation: Preemption (computing) appears to describe the same thing also. I'm not familiar with the procedures around here, so I leave it to someone else to tag it. -- G. Gearloose (?!) 01:04, 5 February 2006 (UTC)[reply]

I am suggesting merging preemption into deadlock. The problem is, preemptive multitasking and preemption of resources aren't exactly the same thing, although they are related. Also, I'm not sure a full article can be written on just preemption. Perhaps a split is a better choice? MagiMaster 09:04, 10 July 2006 (UTC)[reply]

Can you explain what you mean by "preemption of resources"? --Chris Purcell 17:00, 10 July 2006 (UTC)[reply]

Merge from Co-operative multitasking and Preemption (computing)

The discussion at Talk:Deadlock seems to have generated a consensus against merging. The discussion here seems to have fizzled out without it being really clear if there is a consensus. Since both of the other articles are hardly longer than the sections here and don't look like growing (in my opinion) I would propose going ahead and merging those two articles into this article. This would also help remove the Merge backlog. Any objections? --Boson 21:52, 19 October 2006 (UTC)[reply]

I/O bound / CPU bound

Surely the terms "I/O bound" and "CPU bound" aren't states of processes, but properties of a program (or a portion thereof)?

An I/O bound process may be time-limited by the speed of its I/O, but when it cannot continue until I/O occurs then it is "blocked", whether or not it's in a busy wait. If it's waiting for a signal rather than polling, then I'd call that "waiting".

Similarly a CPU bound process is "running" (or "on the CPU"), or if not, then it's "ready".
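A tiny, hedged illustration of that distinction (the state names, the helper, and the threshold are all made up): the scheduler state is a momentary label on a process, while "I/O bound" / "CPU bound" is a judgement about how a program spends its time overall.

from enum import Enum

class State(Enum):
    RUNNING = "running"   # currently executing on the CPU
    READY   = "ready"     # runnable, waiting its turn for the CPU
    BLOCKED = "blocked"   # cannot proceed until some I/O completes

def characterize(cpu_seconds, io_wait_seconds):
    # Crude heuristic: a program that spends most of its time waiting is I/O bound.
    return "I/O bound" if io_wait_seconds > cpu_seconds else "CPU bound"

print(State.BLOCKED.value)                                  # the momentary process state
print(characterize(cpu_seconds=2.0, io_wait_seconds=30.0))  # I/O bound (a program property)
print(characterize(cpu_seconds=45.0, io_wait_seconds=1.5))  # CPU bound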

Multithreading

"Fibers are even more lightweight than threads, and somewhat easier to program with, although they tend to lose some or all of the benefits of threads on machines with multiple processors."

There is no reference to the advantages and disadvantages of multiple cores or processors in a multithreading system, and whether multiple cores benefit or hinder other types of multitasking. This is rather important with the current surge in development of multicore processors. I don't have enough knowledge of or experience with multicore systems to edit the article - can anyone else contribute to this? --[smiler] 06:33, 16 February 2006 (UTC)[reply]

I also wonder exactly what this means; I added the citation-needed template. 222.225.196.13 03:06, 1 August 2007 (UTC)[reply]
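For what it's worth, here is a minimal, hedged sketch (pure illustration, not any OS's actual fiber API; all names invented) of why fibers tend to lose the multiprocessor benefit: they are user-level contexts cooperatively switched on one kernel thread, so at most one of them runs at any instant, whereas real threads can be dispatched to different cores in parallel.

def fiber(name, steps):
    # A "fiber": an ordinary generator that explicitly yields control between steps.
    for i in range(steps):
        print(f"{name}: step {i}")
        yield

def run_fibers(fibers):
    # Round-robin user-level scheduler; everything here runs on the calling thread.
    ready = list(fibers)
    while ready:
        f = ready.pop(0)
        try:
            next(f)              # resume the fiber until its next voluntary yield
            ready.append(f)
        except StopIteration:
            pass                 # fiber finished, drop it

run_fibers([fiber("A", 3), fiber("B", 2)])
# The output interleaves A and B, but only one core is ever used; kernel threads doing
# the same work could be scheduled onto different cores and run in parallel.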

How could a system be designed to allow a choice of operating systems to boot from?

how could a system be designed to allow a choice of operating systems to boot from?

Typical multi-booting systems have a second-stage boot loader. On a PC, the second-stage boot loader sits in the MBR, and gets loaded by the BIOS.
Or you can replace the boot device -- easy if that is removable. In older computers (pre-PC) that was often true. Paul Koning (talk) 16:09, 4 November 2015 (UTC)[reply]

Please specify

"One use for interrupts is to allow a simpler processor to simulate the dedicated I/O processors that it does not have."

I don't understand. A processor simpler than what (central or I/O?) is using interrupts to simulate the I/O processors?! —Preceding unsigned comment added by Doru001 (talkcontribs) 09:15, 16 April 2008 (UTC)[reply]

I know what this sentence was supposed to mean, but it is completely off-topic here. See channel I/O. Deleted it.  Done --Kubanczyk (talk) 20:47, 2 July 2008 (UTC)[reply]

Amiga

I added the note on the Amiga; it seems the guys who really care about this article are not aware that there was a computer called the Amiga (most people think of the Amiga as a video game machine, but those who actually used them for serious media work know better).

There are books on the low-level programming of the system, and reading them you will realize it was a truly pre-emptive multi-tasker. This was mentioned a lot by the userbase themselves; it had task priorities, and it's where the eurodemos that exploit the features of the Amiga got their start.

The reason it crashed a lot and gave GURU errors was that it was expected of the developers to write their programs by the rules that Commodore laid out, and if anyone failed to manage their resources properly, the machine would crash (the result of the CPU executing beyond the code block or writing data into a code block).

This could be seen as a failure from one perspective, but from another, code-purist point of view, it forced coders to write correct code that behaved and managed its resources. If a language like Java (but not as bureaucratic) had been used on the Amiga, it would not have crashed, because of the elimination of pointers. It was because of the improper use of pointers and the sloppiness of the coders who wrote for it that it crashed.

To use an MPU is to make the statement that humans are not perfect, so we should not expect to be able to write correct code either. Even when Windows 98 and the Mac had MPUs, they would crash. And computers still crash. So in hindsight, was the Amiga so wrong for the way it was designed, or was it more correct than any computer that has been developed since? BTW, Deluxe Paint 3 for the Amiga 1000 was around 300KB, and would run within 512KB with the Workbench and enough memory to paint pictures. When Apple attempted to make a competitor for the Amiga, they came up with the Apple IIgs (15-voice Ensoniq chip with 64KB of memory and a, what was it? 1 MHz processor). And still people worship Apple like they make really good technology. Look at history, guys, don't rewrite it.

97.123.57.99 (talk) 07:59, 11 August 2009 (UTC)[reply]

I removed the bit about the Amiga because it wasn't neutral, it referenced no sources, and it was obvious boosterism by a fan of the Amiga. Not to disparage the Amiga in any way, but the discussions of when consumer operating systems offered preemptive multitasking are somewhat tangential to the topic, and are likely mentioned in the specific articles about those operating systems. Since Windows and MacOS are still widely used it might give some context for people who remember those days, so I left those. Even those aren't vital, really. Better to drop even the Windows and MacOS mentions rather than have a sentence for every introduction of a consumer product which in some way advanced multitasking. If people would like to do some research, perhaps they can find some instances of products which illustrate real-time multitasking systems, for which the article currently has no examples.

--Charles 2/20/2011 —Preceding unsigned comment added by 74.96.54.208 (talk) 22:05, 20 February 2011 (UTC)[reply]

Alternating Multitasking

I was thinking: should this article cover alternating multitasking? The term is not commonly used, but it describes SIMD array processors in which the processor tries to schedule two tasks at once in an alternating sequence. So if there are two tasks, Work A and Work B, the sequence would be A, B, A, B, with each task going work, stop, work, stop. The concept is based on overlapping the time between Task A stopping and Task B working.

I know OCZ Enhanced Bandwidth and IBM Power Architecture threading use this mechanism, except that IBM does it a bit differently, because the instructions are very long and involve preparing the query, fetching, etc. The time between thread preparations is overlapped by adding XDR RAM. It is not really a true alternation method, but it achieves the effect as follows:

  • Since it is slower at preparing threads (in the data cache), it adds XDR RAM so it can use that time to prepare the instruction cache and decoding; therefore the total buffered time is not wasted.

The only problem is that the speed of the XDR RAM is not enough, so it can only reach 1 petaFLOPS (or roughly tenfold) instead of a more scalable multiple.

Usually when components are doing alternation, it is best when one rate is faster than the other. --Ramu50 (talk) 01:45, 7 October 2008 (UTC)[reply]
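If it helps, a toy, hedged illustration of the A, B, A, B idea described above (the phase lists are invented and real hardware is far more involved): each task alternates a work unit with a stall unit, and the two are offset so one task's stall is always covered by the other's work.

A = ["work", "stall", "work", "stall"]   # task A: work, stop, work, stop
B = ["stall", "work", "stall", "work"]   # task B: the same pattern, offset by one phase

executed = []
for a_phase, b_phase in zip(A, B):
    # Run whichever task is in a "work" phase this time unit
    # (with this offset they never stall at the same time).
    executed.append("A" if a_phase == "work" else "B")

print(executed)   # ['A', 'B', 'A', 'B'] - no time unit is wasted on a stall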

Nice Article

Just want to thank everyone involved in writing this article. It's well-organized and concise, so I quickly got a good overview of the topic. Reads pretty well too, with consistent voice throughout. --Armchair info guy (talk) 03:00, 1 March 2009 (UTC)[reply]

  • Hear, hear! Came here to say thank you for the great article, and someone already did it before! I wish all articles were so well written and to the point. Great article! Thank you! WillNess (talk) 19:35, 12 July 2011 (UTC)[reply]

Preemption overhead vs cooperative?

I expect that there is a certain amount of processing overhead that comes from designing a preemptive OS, due to needing to allow for a management layer that can schedule, prioritize, or idle other processes.

But I don't know if this is really true or not. It seems difficult to do an apples-to-apples performance comparison between a preemptive and a cooperative OS to know if the cooperative can use CPU cycles more efficiently by not having the management layer.

DMahalko (talk) 03:58, 3 June 2009 (UTC)[reply]

Theoretically I think there can be a very significant efficiency advantage to a cooperatively scheduled system, because of the avoidance of locking.

In a preemptively scheduled system, whenever a thread uses a shared resource (that other threads could use), it must use some mechanism of exclusion to ensure that use of the resource is not chaotically split between threads. The mechanism usually used is locking, and this typically entails significant overhead.

In theory, in a cooperatively scheduled system, threads can avoid this overhead by yielding only at times in between using shared resources. However, I say 'in theory' because in reality it is quite often the case that this is impractical. Either there would be (potentially) an unacceptably long wait between yields, or there would be a significant overhead associated with dividing up uses of the shared resource in a way that permits more frequent yields.

In the end, it isn't clear that the overhead of locking is really worse.

debater (talk) 14:28, 9 September 2009 (UTC)[reply]
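To make the trade-off concrete, here is a minimal, hedged sketch (counts and names invented; CPython's interpreter lock muddies any real measurement): with preemptive threads the shared counter needs a lock because the read-modify-write can be interrupted mid-way, while cooperatively scheduled tasks can skip the lock by yielding only between complete updates.

import threading

counter = 0
lock = threading.Lock()

def preemptive_worker(n):
    global counter
    for _ in range(n):
        with lock:            # pay the locking overhead on every use of the shared data:
            counter += 1      # without it, the update could be interrupted half-done

threads = [threading.Thread(target=preemptive_worker, args=(50_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("preemptive total:", counter)    # 200000, thanks to the lock

def cooperative_worker(n):
    # A cooperatively scheduled task: it only gives up control at the explicit yield,
    # so no other task can ever observe a half-finished update and no lock is needed.
    global counter
    for _ in range(n):
        counter += 1
        yield

counter = 0
tasks = [cooperative_worker(50_000) for _ in range(4)]
while tasks:
    for t in list(tasks):     # simple round-robin over the ready tasks
        try:
            next(t)
        except StopIteration:
            tasks.remove(t)
print("cooperative total:", counter)   # 200000, with no lock at all

Of course, as the comment above notes, real code cannot always find convenient yield points between uses of a shared resource, which is where the theoretical advantage erodes.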

Real time

I think the current 'Real time' section is a little too brief and vague:

Another reason for multitasking was in the design of real-time computing systems, where there are a number of possibly unrelated external activities needed to be controlled by a single processor system. In such systems a hierarchical interrupt system was coupled with process prioritization to ensure that key activities were given a greater share of available process time.

I propose to rewrite it as follows:

The preemptive multitasking model provides a convenient programming model for dealing with high-speed I/O.

For a long time computers have used an interrupt handling model to enable the CPU to deal with high-speed I/O events efficiently. The CPU can be doing useful work without having to poll and wait. When an event occurs, the interrupt mechanism directs the CPU to execute an interrupt handler to deal with the event. When the interrupt handler has finished, normal execution can proceed from where it was interrupted.

However, the interrupt handler is itself limited. It must never take too long to complete, because all the time it is executing it cannot itself be interrupted. To prevent the possibility of two or more simultaneous (or nearly simultaneous) events being handled too slowly, interrupt handlers must finish very quickly.

The only solution to this limitation is to have high-speed I/O events dealt with in two stages. The first stage is carried out by the interrupt handler, which performs the very minimum actions necessary to deal with the event expediently. This includes sending some signal to the second stage.

The second stage of handling is carried out by a normal (interruptible) piece of software, which generally does not have any of the limitations of an interrupt handler.

The preemptive multitasking model provides a ready-made programming model for this two-stage approach, because the second stage can be a thread which blocks on a signal (e.g. a lock) controlled by the first stage (the interrupt handler).

The mechanisms for prioritization of threads can be used to control the execution of the second stages. These mechanisms may be quite sophisticated, and may be chosen for suitability for real-time applications. This can be a factor which makes a preemptive multitasking model especially convenient for some real-time applications.

Note, however, that for many ‘hard’ real-time applications, where response times need to be tightly bounded and guaranteed, preemptive multitasking is not practicable. This is currently an area of advanced research.

I've put it here for people to review first, rather than just shoving it in.

I'm currently looking to find some references. If anyone can suggest any, I'd be grateful.

Does this article explain what a thread is? It should do!

debater (talk) 14:15, 9 September 2009 (UTC)[reply]
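In case it helps reviewers picture the proposal, here is a minimal, hedged sketch of the two-stage scheme (the interrupt is simulated by an ordinary function call and all names are invented): the first stage does the bare minimum and signals, and the second stage is an ordinary preemptible thread that blocks on that signal and does the slow work.

import queue
import threading
import time

events = queue.Queue()                 # raw events stashed by the first stage
event_ready = threading.Semaphore(0)   # the "signal" the second stage blocks on

def first_stage(raw_data):
    # Stands in for the interrupt handler: must finish almost immediately.
    events.put(raw_data)               # minimal work: record the event ...
    event_ready.release()              # ... and wake the second stage

def second_stage():
    # Ordinary, interruptible thread: free to block, be preempted, and take its time.
    while True:
        event_ready.acquire()          # sleep here until the first stage signals
        data = events.get()
        if data is None:
            break                      # shutdown sentinel
        time.sleep(0.01)               # placeholder for the real (slow) processing
        print("processed", data)

worker = threading.Thread(target=second_stage)
worker.start()

for i in range(5):                     # simulate a burst of high-speed I/O events
    first_stage(f"event-{i}")

first_stage(None)                      # tell the second stage to shut down
worker.join()

(The queue alone would suffice in Python; the explicit semaphore is kept only to mirror the "blocks on a signal controlled by the first stage" wording above.)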

Proposed merge of Multi-tasking with Multiprogramming

Having established Multiprogramming as an independent topic from Multi-tasking one year ago, I would again oppose this suggestion.

From my practical experience in the field, the two concepts and definitions should not be confused; Neither is necessarily a subset of the other. (Multi-tasking came after multiprogramming, thus many systems which have multi-tasking do so to facilitate multiprogramming —but not necessarily all!). "Multiprogramming" currently has a section which draws distinctions between the two.

For that reason (to reduce redundancy), I might consider an argument to include a complete treatment of Multi-tasking under the heading of Multiprogramming, along with the various associated concepts: Cooperative vs. Preemptive vs. Time-slice Multi-tasking, Real time, Multi-threading, etc.

Many of these terms were coined by competing technology companies— IBM, Univac, Burroughs, General Electric, et al, and their definitions vary somewhat depending on whose manuals and textbooks one is reading.

They are all parallel but distinct and different technologies which were developed at different times and places by different parties to solve different problems.

If you want to lump them all together under one topic, I suggest "Computer System Resource Allocation" —and include other definitions such as dedicated resources, virtual systems, memory management, paging, multiprocessing, redundant arrays, multiplexing, spooling, background processing, etc.

 Sources said (talk) 06:49, 15 August 2010 (UTC)[reply]

Was cooperative multitasking first?

The present article contains the sentence: Early multitasking systems used applications that voluntarily ceded time to one another. This is true for early home computers, but is it true for computers in general? This sentence was added to the article before its (recorded) history starts so it's a little difficult for me to track down what its author meant to say. Rp (talk) 14:19, 8 August 2011 (UTC)[reply]

In the beginning we inserted 'wait' instructions in the code - typically, but not limited to - every time a peripheral unit (card reader, printer, ...) was accessed in order to voluntarily give up the time slice. — Preceding unsigned comment added by John.St (talkcontribs) 10:32, 11 August 2011 (UTC)[reply]
"In the beginning" i.e. the 1960-70'es. John.St (talk) 17:49, 13 August 2011 (UTC)[reply]

Multitask (cooking)

Hi, I'm Ved Kumar. What does multitask mean? For example, "My Mum is multitasking to cook for me" - does that make any sense? — Preceding unsigned comment added by 86.176.49.109 (talk) 17:33, 25 April 2013 (UTC)[reply]

Multitasking and multithreading

Just wanted to add some historical facts about the terminology in use today. In the mid 1960s, companies like Digital Equipment, Data General and Interdata (to name a few) started producing what were then called mini-computers. Many had somewhat primitive operating systems, with only two user processes running, which were called the "foreground" and the "background" processes. When an application initiated an I/O (e.g., a disk read), the process would be blocked until the I/O completed. To attain simultaneity, the operating systems supported an API which was called "multi-tasking". This API allowed programmers to write code that would run as separate tasks within one of the two processes, much like today's "threads". (They were lightweight processes, with fast context switching, and they shared memory and even code if so desired.) With the advent of the IBM Personal Computer, which started out with a single-task operating system (DOS), the need to be able to perform more than one task at a time became obvious, and as it was implemented, the term "multi-tasking" was "usurped" from the mini-computer world by those who had never heard of it. When the Unix world became aware of the advantages of having lightweight processes, the "wheel" was renamed (not re-invented), and we got what is today known as POSIX threads (again, nothing new, just a renaming of the concept used by mini-computers from the mid 1960s through the mid 1970s). I personally worked with both the old mini-computer multitasking systems and POSIX threads, so I am very familiar with them. — Preceding unsigned comment added by Bobcasas1 (talkcontribs) 17:26, 15 July 2013 (UTC)[reply]

Change the picture?

Wikipedia is not a soapbox — Preceding unsigned comment added by 70.59.27.125 (talk) 18:50, 29 October 2013 (UTC)[reply]

So what picture would you suggest in place of the one currently in the article? GB fan 18:56, 29 October 2013 (UTC)[reply]
I currently don't have a picture, but I'd be willing to find/make a new one. I just think the choice of programs/things running seems like it is trying to push some sort of viewpoint. If you're trying to have the same "purpose" without the possibly unintentional propaganda, it'd just need to be a screen with multiple programs running. 70.59.27.125 (talk) 19:12, 29 October 2013 (UTC)[reply]
I now have a picture; I replaced the picture with a somewhat similar one (but without the political "taint"). (Why this article has such a political image for an unpolitical topic is a bit beyond me.) Benjamintf1 (talk)

Memory protection

Are the 'citation needed' tags really required on the 'Memory protection' section? There must be a point where generic background knowledge makes explicit citations irrelevant.

For example "When multiple programs are present in memory, an ill-behaved program may (inadvertently or deliberately) overwrite memory belonging to another program, or even to the operating system itself". Does this really require a citation? Mtpaley (talk) 00:12, 13 November 2013 (UTC)[reply]

Proper references are required for any claim made in any Wikipedia article. This is not 'generic background knowledge' as you claim but a specific phenomenon (if indeed it exists). The only time references are not required is for very obvious statements such as "The sky is blue" (see WP:BLUE), but even then there must exist generic references that state that the sky is blue. Having said that, I note that the Wikipedia article on Sky has no fewer than four references to support the claim that the sky is blue. 86.140.145.17 (talk) 08:58, 10 March 2014 (UTC)[reply]

Factual problems...

Many of the claims in this article are incorrect. Here is but one example:

"Despite the difficulty of designing and implementing cooperatively multitasked systems, time-constrained, real-time embedded systems (such as spacecraft) are often implemented using this paradigm. This allows highly reliable, deterministic control of complex real time sequences, for instance, the firing of thrusters for deep space course corrections."

This is not true. The entire reason servers and all modern operating systems, desktop and embedded alike, are exclusively preemptive is that cooperative multitasking was inherently *unreliable*. The only reason to use cooperative multitasking is ease of implementation, as it requires no special hardware and only a very simple scheduler.

Today, almost all spacecraft use preemptive multitasking - along with almost everything else. Almost every operating system - all modern versions of Windows, Linux variants, BSD variants, Mac, Android - uses preemptive multitasking, including VxWorks, the OS that runs spacecraft such as the Curiosity rover, Deep Impact, the James Webb Space Telescope, Mars Pathfinder, Spirit and Opportunity, SpaceX Dragon, and the Phoenix Mars Lander, and so on. The ISS runs Debian, itself a preemptive multitasking OS. Safety-critical systems such as avionics run hardware-level preemptive schedulers to guarantee CPU time, as is the case with the F-35 avionics running Green Hills Software's Integrity OS - specifically chosen because of its rigorous preemptive multitasking and memory protections.

I can find no reference to any spacecraft built in the last 30 years that relies on cooperative multitasking, and for obvious reason: a single erroneous subroutine could hang the entire system or put lives in jeopardy.

Vcfahrenbruck (talk) 12:23, 14 March 2014 (UTC)[reply]

Hello there! Just as a note, you're more than welcome to edit and improve this article. Just provide a few references, and off you go. :) — Dsimic (talk | contribs) 22:08, 15 March 2014 (UTC)[reply]
If something lacks references feel free to add a box above it, write [Citation Needed], remove it, or improve it (as long as it follows WP:SYNTH), so go ahead.
Sincerely, --Namlong618 (talk) 08:12, 27 February 2015 (UTC)[reply]

External links modified

Hello fellow Wikipedians,

I have just added archive links to one external link on Computer multitasking. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true to let others know.

An editor has reviewed this edit and fixed any errors that were found.

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers. —cyberbot IITalk to my owner:Online 21:05, 25 August 2015 (UTC)[reply]

Looking good. It's a reference, though. — Dsimic (talk | contribs) 05:30, 26 August 2015 (UTC)[reply]

Leo III not first

The section "multiprogramming" says that LEO III was the first to do multiprogramming. This is clearly not correct, since LEO III shipped in 1961 according to its article, and the Electrologica EL-X1 demonstrated multiprogramming in 1958. See "The X1 computer" by B.J. Loopstra, The Computer Journal (1959) 2 (1): 39-43 [1] Paul Koning (talk) 16:14, 4 November 2015 (UTC)[reply]

This statement is indeed entirely false; we're publishing inaccurate information. I'll revise the paragraph to include details about the Bull Gamma 60 and X1 computers while removing the claim that the Leo III was the first. Damien.b (talk) 12:12, 10 January 2024 (UTC)[reply]

GPUs

With the advent of GPUs, complex video data could be displayed rapidly, provided that is the only task assigned to them. In recent years, the use of GPUs as powerful math coprocessors has become common through the use of languages such as OpenCL or CUDA. The big shortcoming in that concept is that GPUs lack a functional operating system and are therefore not amenable to effective multitasking. A topic on "Stream Computing" should mention these facts, together with a clear statement of what needs to happen in the future. — Preceding unsigned comment added by 208.127.217.116 (talk) 23:06, 6 April 2017 (UTC)[reply]

Main image doesn't represent multitasking very well

It's like someone asking what a symphony sounds like and you listing off all the instruments, giving a brief mechanical description of each and neglecting to mention the composer, the conductor, the time signature, and how the instruments take on different roles, complement each other, harmonize, etc.

The image shows the baked user interfaces of a variety of programs looking pretty and not doing much else. You couldn't even comfortably human multitask (https://en.m.wikipedia.org/wiki/Human_multitasking) with the way the windows are configured.

— Preceding unsigned comment added by 2601:643:103:F170:ECDC:349F:26BD:4841 (talk) 21:11, 19 August 2017 (UTC)[reply] 

Dear poor author

Ever hear of citations? — Preceding unsigned comment added by 2601:46:200:5671:5C61:5225:2DC5:A22 (talk) 07:45, 28 July 2018 (UTC)[reply]