Wikipedia:Reference desk/Archives/Computing/2012 December 31

Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


December 31

Memory advantage of a hypothetical computer whose hardware used -1, 0, and 1

If there were a computer that used three values per 'bit' instead of binary two at the hardware level, would there be a simple third improvement as far as how many different values could be stored in a set number of memory locations of the same word size, or is it something different? 67.163.109.173 (talk) 02:32, 31 December 2012 (UTC)[reply]

That would represent a 50% increase in the amount of data which could be stored per bit. However, existing programs wouldn't be able to fully take advantage of this. For example, characters are commonly encoded as either 7 bits (128 values) or 8 bits (256 values). Even if you were to remap this to "tri-bits", you would have to settle for either 81, 243, or 729 possible values for a character, using 4, 5, or 6 tri-bits. Conversely, there might be some operations where tri-bits are more efficient, like in comparing two values. You could use a single tri-bit to represent less than, equal to, or greater than, while you would need at least 2 regular bits to represent these 3 possibilities. StuRat (talk) 02:39, 31 December 2012 (UTC)[reply]
The increase in capacity is actually a factor of log₂ 3 ≈ 1.58, or 58%, assuming the trits occupy the same amount of space as the bits. Most consumer solid-state drives actually use more than two states per cell to increase density—see multi-level cell. -- BenRG (talk) 04:35, 1 January 2013 (UTC)[reply]
OK, I don't understand that at all. 3 states is 50% more than 2 states, so how do you get 58% ? StuRat (talk) 06:16, 1 January 2013 (UTC)[reply]
With k bits you can represent 2^k different states. Turning that around, to represent n different states you need log₂ n bits. With k trits you can represent 3^k states, which would require log₂(3^k) = k·log₂ 3 bits. -- BenRG (talk) 16:55, 1 January 2013 (UTC)[reply]
OK, I see what you're saying. Let's run some numbers:
  Possible states with n digits
  n   2^n (bits)   3^n (trits)
  -   ----------   -----------
  1            2             3
  2            4             9
  3            8            27
  4           16            81
  5           32           243
  6           64           729
  7          128          2187
  8          256          6561
If we compare 3^5, which gives us 243 states, with 2^8, which gives us 256 states, that's about the same number of states, with 8/5, or 60%, more bits than trits. StuRat (talk) 19:30, 2 January 2013 (UTC)[reply]
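(As an aside, the comparison above is easy to check numerically; the following is a minimal Python sketch, not part of the original discussion, and the helper names are made up for illustration.)

  import math

  # A trit carries log2(3) ≈ 1.585 bits of information,
  # i.e. roughly 58% more than a bit.
  print(math.log2(3))

  def bits_needed(num_states):
      """Smallest k such that 2**k >= num_states (exact integer arithmetic)."""
      k = 0
      while 2 ** k < num_states:
          k += 1
      return k

  def trits_needed(num_states):
      """Smallest k such that 3**k >= num_states (exact integer arithmetic)."""
      k = 0
      while 3 ** k < num_states:
          k += 1
      return k

  # The 256-vs-243 comparison from the table: a byte's worth of states
  # needs 8 bits but 6 trits, while 243 states fit in exactly 5 trits.
  print(bits_needed(256), trits_needed(256))   # 8 6
  print(bits_needed(243), trits_needed(243))   # 8 5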
As our article section on the representation of bits elaborates, current hardware uses the absolute minimum discernible difference in some physical property - like the stored electric charge on a capacitor, or the variation in the reflectivity of a surface pit in a compact disc - to represent either 1 or 0. Almost by definition, if you introduced an enumeration other than binary, reading each digit would require more power, or more spatial area/volume, or more sensitive electronics, or some other improvement in the resolution of the hardware, compared to what we have today: so you'd lose energy efficiency, or your devices would be physically larger or more expensive. From the viewpoint of logical memory addressability, each memory location would store more information; but from the viewpoint of physical memory, the storage and manipulation would be less effective and less efficient. This is the reason most of today's digital electronics opt for binary representation for all stored information. Nimur (talk) 04:01, 31 December 2012 (UTC)[reply]
So when these people say this: "Due to the low reliability of the computer elements on vacuum tubes and inaccessibility of transistors the fast elements on miniature ferrite cores and semiconductor diodes were designed. These elements work as a controlled current transformer and were an effective base for implementation of the threshold logic and its ternary version in particular [7]. Ternary threshold logic elements as compared with the binary ones provide more speed and reliability, require less equipment and power. These were reasons to design a ternary computer." - so that penultimate sentence can't possibly be true? I wish I could visualize, or that they had pictures of, how exactly ferrite cores and diodes were configured to make a ternary hardware implementation. 67.163.109.173 (talk) 14:33, 31 December 2012 (UTC)[reply]
I read that page a few times. It uses English a bit imperfectly, but I think I finally interpret your quoted section to mean simply that ferrite cores are an improvement over vacuum tubes - whether used for binary digital logic or any other logic circuit. Were the circuits tuned to their optimal performance, the same rule would apply: binary is more information-dense, as I explained above. Needless to say, your article describes Soviet computers of the 1970s: computer technologies, methods, and practices that were experimental in that era and community, and that generally don't apply to modern systems. For an overview, see Science and technology in the Soviet Union. Soviet computers are fascinating from a historical perspective. I have always found it interesting that the C compiler was (reportedly) unavailable in most of the U.S.S.R. until after the 1990s, which I have heard put forward as the explanation for why post-Soviet programmers trained in the Russian education system still prefer Delphi and Pascal. This is also a reason why some experts considered Russia to be ten years behind in computer science on the whole. It's an interesting spin on "legacy code." Nimur (talk) 01:03, 1 January 2013 (UTC)[reply]
Interesting what you said about the Russians not being able to get a C compiler for so long. What would have stopped them from being able to get gcc, which Stallman offered very freely to anyone who wrote? Was gcc declared non-exportable by the US gov't? 67.163.109.173 (talk) 02:09, 1 January 2013 (UTC)[reply]
This is drifting far from the original question about ternary digital logic, but it's an interesting topic. It seems that nothing "prevented" access to GCC; rather, massive national and institutional inertia slowed its adoption. GCC was first made public in the late eighties, and remember that even by 1990 there was not yet a "world wide web" nor widespread use of the Internet in Russia. HTTP had not yet been invented, let alone become widespread; and where predecessor network software was used, it was inconvenient and impractical for a Soviet programmer to peruse an American "website." Basic network connectivity between Soviet computer facilities was sparse. Connectivity to networks that hosted American and international content was even rarer. Long-distance international phone calls were expensive (and suspicious); "browsing" and "web surfing" were far less casual than today. Information about new software and methodology took much longer to disseminate; today, new software versions launch over timespans of just a few hours, but two decades ago, a software update rollout might take place over months or years. Furthermore, though today gcc is often the "obvious" choice of compiler, the GNU C compiler was hardly the "industry standard" in 1990 - especially in the U.S.S.R. Early GCC was buggy, largely unknown to most programmers, and only supported a few architectures. Soviet computers were built with weird processors; lucky, well-funded researchers had such "personal computers" as Soviet Z80 clones or Japanese processors. IBM-compatible machines, including all x86 architectures, were extremely rare. And you might take for granted that free Linux/Unix exists today; but in 1987 there was no Linux yet, and Unix was an American commercial software system designed to run on American computer hardware.
Finally, even when free (or commercial) C compiler software was functional and available, it was still not widespread among Soviet academics or industry programmers. Consider that today, with so much free software easily available via the internet at zero cost, most people still do not have a copy of gcc on their personal computers and devices. Neither cost nor ease of acquisition is the limiting factor today. Even if software is readily available, a community of experience and expertise is needed for software adoption to become widespread. C was a language designed and used in the United States; despite its advantages for systems programming, it did not seem to gain traction in the U.S.S.R. until quite a bit later.
To directly answer your question: I do not believe that gcc ever underwent regulatory oversight for export or ITAR considerations. Much kerfuffle was made during the 1980s and 1990s regarding free software implementations of cryptography, data compression, and hashing algorithms; but I don't think the compiler implementations ever drew much attention. Nimur (talk) 09:08, 1 January 2013 (UTC)[reply]
I think the reason modern computers are binary is simply that VLSI transistors lend themselves to binary logic but not ternary logic. That wasn't necessarily true of other computing technologies. -- BenRG (talk) 04:35, 1 January 2013 (UTC)[reply]
As a side note, what you're describing is known as balanced ternary. --Carnildo (talk) 04:31, 31 December 2012 (UTC)[reply]
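(Purely for illustration, here is a small Python sketch of balanced ternary, the -1/0/+1 system mentioned above; it is not from the original thread, and the function name is made up.)

  def to_balanced_ternary(n):
      """Return the balanced-ternary digits of the integer n, most significant
      first, where each digit is -1, 0, or +1."""
      if n == 0:
          return [0]
      digits = []
      while n != 0:
          r = n % 3              # remainder in {0, 1, 2}
          if r == 2:             # write 2 as -1 and carry 1 into the next place
              digits.append(-1)
              n = n // 3 + 1
          else:
              digits.append(r)
              n //= 3
      return digits[::-1]

  print(to_balanced_ternary(8))    # [1, 0, -1], i.e. 9 - 1
  print(to_balanced_ternary(-8))   # [-1, 0, 1]; negation just flips every digit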
All true, with minor historical exceptions, for memory and logic. Communication works under different economic constraints, resulting in many kinds of M-ary transmission. Jim.henderson (talk) 14:49, 31 December 2012 (UTC)[reply]
As well as the Setun computers, you might be interested in Thomas Fowler (inventor). There's a picture of the stained glass window in St Michaels Church, Great Torrington, showing his ternary calculator built out of wood, at Torrington Museum. Dmcq (talk) 01:50, 1 January 2013 (UTC)[reply]
Very cool! I also found a video of someone operating an apparently cardboard implementation of the machine here. 67.163.109.173 (talk) 03:27, 1 January 2013 (UTC)[reply]

Microphone humming

When I use a microphone (inserted into the audio jack of a desktop computer), it tends to pick up a very low frequency sound. Trying to muffle the mic with cloth, various forms of tapping the mic, and moving the mic further from the computer/any power outlets or changing its direction, etc. generally have no effect on reducing this. I did find that it harmonized very well (it sounded very much like the tonic) with Preparing the Chariots from the Hunger Games soundtrack (which I believe is in B major), which leads me to believe it is 60 Hz mains hum. Two methods I've found for making the hum stop are: 1. just waiting, since sometimes the humming will spontaneously stop being picked up (this tends to be unreliable and take over an hour), and 2. sticking the mic into the metal bell of my bass clarinet and clanking it around for a few seconds, after which, when I remove it, the sound is no longer picked up 95% of the time. So my question is this: am I right to believe it's mains hum? And can someone explain how my second method works? Brambleclawx 04:35, 31 December 2012 (UTC)[reply]
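(Not a direct answer, but one way to check whether the hum really sits near 60 Hz is to record a few seconds of it and look at the spectrum. A rough Python sketch using numpy is below; the file name is a placeholder and the recording is assumed to be mono 16-bit, so treat it as an illustration rather than a recipe.)

  import wave
  import numpy as np

  # Hypothetical recording of the hum, saved as a mono 16-bit WAV file.
  with wave.open("hum_sample.wav", "rb") as w:
      rate = w.getframerate()
      frames = w.readframes(w.getnframes())
  samples = np.frombuffer(frames, dtype=np.int16).astype(np.float64)

  # Magnitude spectrum of the whole recording.
  spectrum = np.abs(np.fft.rfft(samples))
  freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)

  # Look between 20 Hz and 200 Hz, where mains hum (50/60 Hz and its
  # first few harmonics) would show up, ignoring the DC component.
  band = (freqs > 20) & (freqs < 200)
  peak = freqs[band][np.argmax(spectrum[band])]
  print("strongest low-frequency component: %.1f Hz" % peak)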

I find it's usually something else on the same circuit. Just increasing the distance from the interference source can help, and perhaps you also did that with your 2nd method. A 2nd theory might be that there's something loose that vibrates, and shaking it around tends to move it out of position to vibrate. StuRat (talk) 04:39, 31 December 2012 (UTC)[reply]
Just plain shaking the mic doesn't seem to do anything, though. (This isn't really an issue for me, since I do have a solution that works; I'm just curious as to the source/reason behind how such an unusual method seems to work.) Thank you for your hypotheses. Brambleclawx 15:05, 31 December 2012 (UTC)[reply]
Perhaps there could be a build-up of static electricity in the microphone ? StuRat (talk) 22:01, 31 December 2012 (UTC)[reply]
Is the computer grounded? If not, I suggest you ground it. You can also try connecting the computer to the electrical grid through an on-line UPS. Ruslik_Zero 14:58, 31 December 2012 (UTC)[reply]
I agree with Ruslik0 that proper grounding of the computer, peripherals, and anything else on the same circuit is definitely the first thing to check for. You should also make sure that the microphone cable doesn't run parallel to any power cords or near the power supply of your PC. 209.131.76.183 (talk) 18:56, 2 January 2013 (UTC)[reply]

Update for Microsoft Excel

Every morning when I start up Excel to activate a file, after I enter the password, Excel says it has stopped working and is looking for a solution. A moment later, it opens the file and asks for the password again and everything is fine. This behavior is very consistent. What's happening? I'm sure there is no hope of getting an upgrade, but is there anything I can do? --Halcatalyst (talk) 16:33, 31 December 2012 (UTC)[reply]

Could we be dealing with two levels of passwords ? Perhaps first it asks for a network password, then, when it determines it can't work in that manner, it asks for a standalone password, and continues in that mode. StuRat (talk) 22:04, 31 December 2012 (UTC)[reply]
No, that isn't it. But I forgot to mention it's Excel from Office 2010 running under Windows 7. --Halcatalyst (talk) 00:32, 1 January 2013 (UTC)[reply]

Pronunciation of NAS?

Is the acronym of Network-attached storage (NAS) usually pronounced like "nass" (rhymes with mass) or "naz" (rhymes with jazz)? --71.189.190.62 (talk) 19:39, 31 December 2012 (UTC)[reply]

From my brief review of YouTube, it sounds like the Brits say "nazz" while North Americans say "nass" or "noss", although in most cases people say it so quickly that it's difficult to tell which; there's an element of hearing whatever you want when listening to these. You should probably move this to the Language desk rather than the Computing one. Shadowjams (talk) 21:57, 31 December 2012 (UTC)[reply]
I either say "network attached storage" or N-A-S (like F-B-I). StuRat (talk) 22:05, 31 December 2012 (UTC)[reply]

Pixels on tablets

Can you see pixels on a Nexus 7, iPad Mini or Kindle Fire HD? I heard that with a high enough resolution you can't see the pixels. 82.132.238.177 (talk) 23:51, 31 December 2012 (UTC)[reply]

Apple’s claim is that you won’t notice individual pixels at an ordinary viewing distance. So far I’m unaware of (the likely inevitable) serious competitors, although I can’t say individual pixels have really been bothering me for the past decade or so. If you see them, you’re probably putting your eyes too close to the screen. ¦ Reisio (talk) 01:21, 1 January 2013 (UTC)[reply]
Really? Many Apple fans claim they see a big difference between Retina and non-Retina. Is this partly just marketing hype, then? Is there no actual scientific research or anything on this? 82.132.218.163 (talk) 01:48, 1 January 2013 (UTC)[reply]
There’s definitely a difference, but if it were that great of one, you wouldn’t be asking these questions. ¦ Reisio (talk) 02:06, 1 January 2013 (UTC)[reply]
Well, to be honest, at normal reading distance I can't see the difference between Retina and non-Retina, which is why I questioned it. 82.132.219.85 (talk) 02:35, 1 January 2013 (UTC)[reply]
It might be helpful if we could get some actual numbers, in terms of pixels per inch. StuRat (talk) 03:20, 1 January 2013 (UTC)[reply]
Ah, I see they are at the first link, which was renamed from Retina display. I typically notice pixels when a drop of water lands on the screen, acting as a tiny magnifying glass. I wonder if the Retina display can pass this test. Or, more importantly, if you draw a 1 pixel wide line on it at a 1 degree angle, will it have jaggies or look blurry ? StuRat (talk) 03:23, 1 January 2013 (UTC)[reply]
The resolution at the center of the fovea is 30 seconds of arc (source: visual acuity), or π/21600 radians, so the pixels of a display will be invisible at distances beyond roughly 21600/(πD), where D is the pixel density in dots per unit length. The screens of the iPad Mini, the Nexus 7 and Kindle Fire HD 7", and the Kindle Fire HD 8.9" are 163, 216 and 254 dpi respectively (source: list of displays by pixel density). The corresponding minimum viewing distances are 1.1 m, 0.8 m, and 0.7 m. This may be an oversimplification, though. -- BenRG (talk) 04:19, 1 January 2013 (UTC)[reply]
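(A minimal Python sketch of the arithmetic above, assuming the 30-arc-second acuity figure and the pixel densities quoted in the post; not part of the original answer.)

  import math

  ARCSEC_PER_RADIAN = 180 * 3600 / math.pi   # about 206265

  def min_viewing_distance_m(dpi, acuity_arcsec=30.0):
      """Distance in metres beyond which one pixel of pitch 1/dpi inch
      subtends less than acuity_arcsec of arc."""
      pixel_pitch_m = 0.0254 / dpi                  # one pixel, in metres
      angle_rad = acuity_arcsec / ARCSEC_PER_RADIAN
      return pixel_pitch_m / angle_rad

  for name, dpi in [("iPad Mini", 163),
                    ("Nexus 7 / Kindle Fire HD 7 inch", 216),
                    ("Kindle Fire HD 8.9 inch", 254)]:
      print("%s: %.2f m" % (name, min_viewing_distance_m(dpi)))
  # Prints roughly 1.1 m, 0.8 m and 0.7 m, matching the figures above.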
Yes, what is displayed on the pixels certainly matters. If it's one solid color, it certainly will be harder to see the pixels than if it's a single-pixel-width line at a slight angle, in sharp contrast to the background. The screen brightness and contrast, as well as the ambient lighting, also matter. The screen surface also makes a difference, as some screens will blur the pixels more than others. StuRat (talk) 06:12, 1 January 2013 (UTC)[reply]
By the solid-color test even a 100-dpi panel's pixels are invisible at a typical distance. -- BenRG (talk) 20:06, 1 January 2013 (UTC)[reply]
That depends on the pixels. Some technologies seem to bump pixels right up against each other, while others have a black outline around them. StuRat (talk) 19:02, 2 January 2013 (UTC)[reply]
In practical use I'm certainly not ever aware of individual pixels or of any blockiness - but then one mostly views photographic images and anti-aliased text, where you wouldn't expect to notice even much larger pixels. I made up a sample 1280x800 test image, with an individual black pixel on a white background (and some one-pixel wide lines). With my reading glasses on, the single pixel is perceptible out to about 600mm. The one pixel wide lines are perceptible out to about 2000mm. Looking at that one black pixel, which is significantly less noticeable than the dozens of tiny flecks of dust and stuff that one always finds on such a screen, it's hard to imagine a significantly smaller pixel gauge yielding a worthwhile improvement in appearance. -- Finlay McWalterTalk 17:51, 1 January 2013 (UTC)[reply]
To test resolving power you should display alternating white and black lines and see how far away you can distinguish them from uniform gray. I just tried it on my 120-dpi laptop display and got a resolution of around 50 arc seconds (with prescription reading glasses to correct mild astigmatism). Based on this I probably could see the iPad Mini's pixels at a typical viewing distance, but it's marginal. My distance vision tested by a Snellen chart is a bit better, around 40 arc seconds. -- BenRG (talk) 20:06, 1 January 2013 (UTC)[reply]
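(If anyone wants to repeat this test, here is a rough Python sketch that writes an alternating one-pixel black/white line pattern to a PGM file using only the standard library; the image size and file name are arbitrary.)

  WIDTH, HEIGHT = 800, 600   # arbitrary test-image size

  # Rows alternate between black (0) and white (255), one pixel each, so the
  # pattern should blend into uniform grey once the lines can no longer be resolved.
  rows = []
  for y in range(HEIGHT):
      value = 0 if y % 2 == 0 else 255
      rows.append(bytes([value]) * WIDTH)

  with open("line_pattern.pgm", "wb") as f:
      f.write(b"P5\n%d %d\n255\n" % (WIDTH, HEIGHT))   # binary greyscale PGM header
      f.write(b"".join(rows))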
Did you try rotating the 1 pixel wide line by a degree, to see if you can detect any jaggies or fuzziness where each jag would otherwise be ? StuRat (talk) 19:05, 2 January 2013 (UTC)[reply]