
Wikipedia:Reference desk/Archives/Computing/2012 February 14

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


February 14


Computer clock drift


Why is it that computer clocks drift so quickly? Why aren't there (or are there?) quartz clocks to keep, or synchronize computer times? I notice drift on the order of 3 seconds per 24 hours on one of my machines... which seems extreme. This isn't a practical issue because I just synchronize to a time server, but why is it so dramatic in the first place? Shadowjams (talk) 00:20, 14 February 2012 (UTC)[reply]

Because the NMOS RTC is cheap crap and the CPU, while clocked very fast, isn't intended as an accurate timekeeper and these days is downright narcoleptic. The last time I had a Sun workstation (which was a decade ago) it kept time, even when off for weeks, like a Swiss watch - because its battery-operated TOD clock wasn't cheap crap. The very few PC users who really need accurate time (e.g. for stock trading or astronomy) install accurate clock cards (or radio-time cards). -- Finlay McWalterTalk 00:30, 14 February 2012 (UTC)[reply]
For a (to my mind rather fascinating) discussion about the problems of fixing clock drift with a naive NTP sync, and the more sophisticated steps that need to be taken to ease the adjustment in slowly, see this documentation about the Dovecot mail server. -- Finlay McWalterTalk 00:46, 14 February 2012 (UTC)[reply]
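To illustrate the slewing idea from that writeup: rather than stepping the clock (which can make time jump, or even run backwards), NTP-style daemons spread the correction out at a bounded rate. A quick sketch in Python, assuming the classic 500 ppm adjtime()-style slew limit (the exact limit varies by OS):

```python
def slew_duration(offset_s, max_rate_ppm=500):
    """How long a gradual slew takes to absorb a clock offset when
    the clock may only be sped up or slowed down by max_rate_ppm
    (parts per million). During the slew, time stays monotonic."""
    return abs(offset_s) / (max_rate_ppm / 1e6)

# A 3-second offset (the drift reported above, after a day)
# takes 6000 s, i.e. about 100 minutes, to slew away:
print(slew_duration(3.0))  # -> 6000.0
```

A naive step would fix the offset instantly, but breaks anything that assumes time never goes backwards - the failure mode the Dovecot page describes.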
And I'd say the reason why so many PCs come with such crap clocks is because the clock accuracy isn't listed along with the essential stats (RAM, HD size, MHz, etc.). If it was, people would complain that they don't want to spend so much for a computer with such an inaccurate clock. StuRat (talk) 05:10, 14 February 2012 (UTC)[reply]
This seems strange when accurate clocks can be manufactured for a very few dollars. Do computers use rejects from the watch industry? Even the cheapest watches these days seem to achieve accuracy within 3 seconds per month. Three seconds in 24 hours seems extreme inaccuracy, or do they just not bother to regulate the oscillator? Dbfirs 09:24, 14 February 2012 (UTC)[reply]
They use the same 32.768 kHz crystal as every quartz watch, but they feed this into a 32-bit counter which has to hold values spanning over 100 years. So they downsample the oscillator to 1 Hz and feed that into the counter (I'm looking at the datasheet for a Dallas DS1602, but they're much the same). If they had a 64-bit counter they could feed it clocks at 1024 Hz, and you'd get great timing. But the clock chip's wire-line protocol is the same as it was on a PC-AT, and that's a 32-bit value covering ~125 years. Of course it would be trivial for them to implement a 64-bit, or bigger, counter (with very modest implications for power consumption off the battery) but they'd have to change the wire protocol, and so all the software downstream of it would need to change too. StuRat hits upon the reason they haven't bothered (even though they've changed everything else in the decades since this was designed) - no-one is asking them to (even if they probably should be). 87.113.204.4 (talk) 11:11, 14 February 2012 (UTC)[reply]
So it's not so much "cheap crap" as "antiquated crap" 87.113.204.4 (talk) 11:47, 14 February 2012 (UTC)[reply]
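The counter arithmetic above is easy to check. A back-of-the-envelope sketch in Python (the ~125-year figure quoted depends on the epoch and signedness; unsigned 2^32 seconds works out closer to 136 years):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # = 31,557,600

# 32-bit counter fed at 1 Hz: wraps after roughly a century
years_32bit_1hz = 2**32 / SECONDS_PER_YEAR            # ~136 years

# The same 32-bit counter fed at 1024 Hz would wrap far too soon
days_32bit_1024hz = 2**32 / 1024 / 86400              # ~48.5 days

# A 64-bit counter at 1024 Hz is effectively unbounded
years_64bit_1024hz = 2**64 / 1024 / SECONDS_PER_YEAR  # ~571 million years

print(round(years_32bit_1hz, 1),
      round(days_32bit_1024hz, 1),
      round(years_64bit_1024hz / 1e6, 1))
```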


Thank you all, especially 87.113's answer, which is exactly the kind of information I was curious about. I know that there's the "when off" clock, but the drift I see is on my servers, which are almost always on. So whatever drift I'm getting is happening in the system clock. Is the issue you're talking about, 87.113, just for the offline clock, or does it relate to the internal OS timekeeping too? (I assume it's just the offline one, but correct me if I'm wrong.) Shadowjams (talk) 20:22, 14 February 2012 (UTC)[reply]
I believe they use the same clock for everything. It's true that when online, they could get the time off the Internet, and, when plugged into an AC socket, they could use the 60 Hz mains signal (US) for timekeeping. However, I don't believe they do (the second option would require that the clock circuitry connect directly to the AC line, not just get DC from the power supply). Regular (non-computer) clocks often use AC timing, and, when the power goes out and they run on battery backup, they seem even more inaccurate than computer clocks. StuRat (talk) 00:29, 15 February 2012 (UTC)[reply]
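For what it's worth, line-powered clocks keep time by counting mains cycles, since utilities hold the long-term average frequency quite accurately. A toy sketch, assuming the clock counts zero crossings (two per AC cycle):

```python
def elapsed_seconds(zero_crossings, mains_hz=60):
    """Convert a count of mains zero crossings into elapsed time.
    There are two zero crossings per AC cycle, so 120 per second
    on a 60 Hz (US) supply, 100 per second on 50 Hz."""
    return zero_crossings / (2 * mains_hz)

print(elapsed_seconds(7200))  # -> 60.0, i.e. one minute at 60 Hz
```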
While there are clocks that can use the AC line for timekeeping (there's a patent on it, I believe), it's pretty rare. It's more common for clocks to receive the NIST broadcast. There are some other factors that affect drift. When a clock is not on AC power, the battery voltage is likely lower (look into simple battery-backup circuits if you're interested). That alone could cause it to drift more. In a computer, there are also ~3000 rpm fans adding vibration (albeit a small factor). And probably the biggest contribution to drift in a computer is... the temperature. The crystal is only specified to be accurate at around 25°C; if your computer runs at 50°C, that adds a lot of drift. These "cheap" circuits and crystals could probably match the drift of cheap watches under the same conditions. --Wirbelwindヴィルヴェルヴィント (talk) 21:54, 16 February 2012 (UTC)[reply]
See this page. It isn't actually a very good test, but according to their results, 25°C more and the crystal will drift about 1.7 s more per day. --Wirbelwindヴィルヴェルヴィント (talk) 21:58, 16 February 2012 (UTC)[reply]
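Those numbers are consistent with the usual parabolic temperature model for 32.768 kHz tuning-fork crystals. A sketch, assuming typical datasheet values (turnover temperature 25 °C, curvature ~0.034 ppm/°C²; real parts vary):

```python
def drift_s_per_day(temp_c, turnover_c=25.0, k_ppm=0.034):
    """Frequency error of a tuning-fork crystal is roughly
    -k * (T - T0)^2 ppm; convert that to seconds per day
    (negative = the clock runs slow)."""
    error_ppm = -k_ppm * (temp_c - turnover_c) ** 2
    return error_ppm * 86400 / 1e6

# At 50 C the clock loses ~1.8 s/day, close to the ~1.7 s/day
# figure from the linked test:
print(round(drift_s_per_day(50), 2))  # -> -1.84
```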
This problem sounds familiar (temperature too high inside the main body). Biologically, the answer is testicles. Perhaps we need a computer equivalent ? A clock card could be mounted outside the main case, similar to other peripherals like the mouse and keyboard, and connected with a USB port. You could even put it right in something like a USB stick drive. This would also make it far easier to replace the battery or entire thing. StuRat (talk) 18:04, 18 February 2012 (UTC)[reply]

Can I somehow "repartition" and burn a Blu-ray disc (BD-RE) to behave like two or more DVDs?


Why I need this: When borrowing a series of films on DVD at my local library (for instance the eight Harry Potter films), I would like to view them in the correct order but there is a 7 day time limit on each loan, and not all films are available at the same time.
Thus I would like to make a temporary copy, on BD-RE, until I have them all (and can watch them in order).
My question: How can I copy two or more DVDs onto a single BD-RE, including everything from the DVD (menus, extra material, various subtitle languages etc.), in such a way that in the end I can view them with VLC media player almost as if I still had each single DVD at hand?
(I have a limited budget and therefore will have to rely on free software or standard features in Windows 7 or maybe Ubuntu.)
Is it possible?
--Seren-dipper (talk) 00:37, 14 February 2012 (UTC)[reply]

You're looking for a program like DVDFab, which can do a lot of what you're asking (I think it has a free trial version). However, it seems like you're doing more work than necessary. If you just want to be able to play back the DVDs later from your computer, you don't need to put them on a Blu-ray disc - you can put them on your hard drive using DVD decryption software (such as the free version of DVDFab, or something like DVD Decrypter, which is free but no longer supported). If you just want to store the movie data on a Blu-ray disc so it doesn't take up space on your hard drive, you would decrypt the DVD (probably saving to your hard drive), then use your Blu-ray burner to burn the files to the Blu-ray disc and delete them from your hard drive. If you want to be able to put your Blu-ray discs in a standard Blu-ray player (i.e. not one attached to your computer with VLC on it) and have them play, you'll need to be a bit more careful about it (using a program like DVDFab that's designed to do it would probably be best), and I'm not sure that you're going to be able to put multiple DVDs on one disc (but I might be wrong).
We don't give legal advice here, but in some jurisdictions, it may be illegal to circumvent technical copy restrictions on electronic media, including the Content Scramble System that is almost certainly used on the DVD's you are interested in. Buddy431 (talk) 04:23, 14 February 2012 (UTC)[reply]
Buddy is correct - circumventing copy restrictions is against the law, and the loan conditions with the library probably have something more to say about copying. Really, Buddy should not be detailing exactly how to do that. Why not simply borrow them in the correct order? If the next one is out on loan, I'm sure the library has a reservation process so you can reserve it the next time it comes in (it wouldn't be any more than 7 days anyway). You then wouldn't have to copy them anywhere. Astronaut (talk) 12:22, 14 February 2012 (UTC)[reply]
A few years ago I asked the librarian about the legality of listening to music from CDs from the library. Most people today do not have a CD Walkman (Discman), so one has to put the music on an MP3 player to listen to it - so are we all criminals then?
The answer was: "Yes! Technically, but it is what one is supposed to do, and as long as you erase the recording after you have listened to it for a while, then no one will mind".
@Astronaut: Of course there is a reservation procedure, which works great for "stand-alone" DVDs, but when there is a group of DVDs that I need to see within a 4-5 day time frame to make sense of them, combined with inter-library loan delays and multiple waiting lists, it seldom works out. ;-(
Because BD-REs are "criminally expensive" I will probably never have more than 3 anyway, so they will be erased after I have watched them. Therefore I believe the same applies to DVDs as to the above-mentioned CDs, and I guess the DVD vendors will be happy about it, because a DVD does not last for more than a handful of loans from the library before it is worn out and has to be replaced (and the library does buy new copies of popular worn-out DVDs!).
The pirates do not make formal registered loans at the library. Do they? (They just copy the whole thing off the internet, where no one has paid anything for it).
--Seren-dipper (talk) 19:49, 16 February 2012 (UTC)[reply]

Exploit:JS/Blacole.BV


If my antivirus says it was successful at removing it, should I believe it or can this virus find a way to hide itself into the system that would require manual removal? 70.52.77.66 (talk) 05:18, 14 February 2012 (UTC)[reply]

Depends on the virus. However, chances are that if there were a known way to "manually" remove the virus, your antivirus software would have incorporated it already. --145.94.77.43 (talk) 14:21, 14 February 2012 (UTC)[reply]

JAR files confusion


I'm not a Java developer, but I'm searching for some equations in a .JAR file.

I've renamed .JAR to .ZIP and unzipped it. This creates a bunch of .class files. Good going so far. But when I open the .class files in XCode, they are basically empty. They contain variable and function/method definitions, but the definitions themselves are empty — there's no code in there.

When I open the .class file in a text editor, I can tell that there are all sorts of text and options I'm missing.

Why is this? Is the file compiled in some way that keeps me from seeing the actual code? Any way around this? Again, I'm not a Java developer, but I've used other programming languages before. --Mr.98 (talk) 17:56, 14 February 2012 (UTC)[reply]

Nevermind! I figured it out. I needed a Java decompiler. Done and done. --Mr.98 (talk) 18:01, 14 February 2012 (UTC)[reply]
(ec) The Java programming language is typically compiled into Java bytecode by the Java compiler (javac), which generates .class files (which are then in turn jar-ed up into JAR archives). .class files are binary files that are only intended to be read by a Java runtime system; they're not human-readable, and they don't contain source code. So in this regard (as in many others) Java is quite unlike JavaScript, for example. There are Java de-compilers (which turn out to be a bit more successful than their counterparts for other compiled languages like C, because the .class file format retains quite a lot of rich information). I don't know anything about XCode or what one should expect it to report about a .class file it's given - recovering the names of classes, interfaces, methods, and fields should be straightforward for tools in general to do, without resorting to actually reversing the bytecode itself. -- Finlay McWalterTalk 18:03, 14 February 2012 (UTC)[reply]
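You can see the binary (non-source) nature of a .class file from its first few bytes. A small Python sketch that parses the header fields defined by the JVM class-file format (magic number 0xCAFEBABE, then minor and major version numbers, all big-endian):

```python
import struct

def classfile_version(header: bytes):
    """Parse the leading 8 bytes of a Java .class file:
    u4 magic (0xCAFEBABE), u2 minor_version, u2 major_version."""
    magic, minor, major = struct.unpack(">IHH", header[:8])
    if magic != 0xCAFEBABE:
        raise ValueError("not a .class file")
    return major, minor

# A synthetic header for illustration (major version 49 = Java 5):
header = bytes.fromhex("CAFEBABE00000031")
print(classfile_version(header))  # -> (49, 0)
```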
Well, anyway, I used JD-GUI and all of the equations were revealed to me. That's all I was really after. I can follow the equations. --Mr.98 (talk) 19:31, 14 February 2012 (UTC)[reply]

File sizes between OpenOffice and Microsoft Word


I wrote a 15-page document in OpenOffice, and the overall size of the file was less than the number of characters. Yet, when I saved the document in Microsoft Word format, the file size increased to over four times the size. Why is this? Does OpenOffice's own format employ some sort of compression algorithm? JIP | Talk 19:28, 14 February 2012 (UTC)[reply]

OpenOffice documents are (now) OpenDocument format ZIP files; rename one to .zip and you can open it with a normal archive program. As with other zips, they can be (and usually are) compressed. -- Finlay McWalterTalk 20:44, 14 February 2012 (UTC)[reply]
This, for example, is a great way to extract images (if someone has unhelpfully sent you a bunch of photos embedded in a word document). You open it with OpenOffice/LibreOffice, save as an .odt, rename that to .zip, open it with winzip or file-roller, and there's a folder in there with all the images. -- Finlay McWalterTalk 20:47, 14 February 2012 (UTC)[reply]
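That image-extraction trick can also be scripted, since the standard library's zipfile module opens OpenDocument files directly (embedded images live under the archive's Pictures/ folder). A sketch using an in-memory toy archive, so the file names here are made up:

```python
import io
import zipfile

def list_odt_images(odt_bytes):
    """Return the names of image entries inside an OpenDocument
    file, which is just a ZIP archive with a Pictures/ folder."""
    with zipfile.ZipFile(io.BytesIO(odt_bytes)) as z:
        return [n for n in z.namelist() if n.startswith("Pictures/")]

# Build a minimal .odt-shaped archive to demonstrate:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("content.xml", "<office:document-content/>")
    z.writestr("Pictures/photo1.png", b"\x89PNG\r\n")
print(list_odt_images(buf.getvalue()))  # -> ['Pictures/photo1.png']
```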
I think the same is true of the new Microsoft .docx format. -- Q Chris (talk) 20:57, 14 February 2012 (UTC)[reply]
Compression for an XML format seems like a very good idea, in general, given its propensity to verbosity. Here (pretty printed) is a single cell entry from the content.xml of an OpenOffice .ods file I just examined:
  <table:table-cell table:formula="of:=[.E79]-[.E72]" office:value-type="float" office:value="-0.285714285714278">
    <text:p>-0.3</text:p>
  </table:table-cell>
which is quite a mouthful. -- Finlay McWalterTalk 21:18, 14 February 2012 (UTC)[reply]
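That verbosity is exactly why the ZIP wrapper pays off: DEFLATE thrives on repetition. A quick sketch with the standard library's zlib, using the cell quoted above repeated to stand in for a sheet full of similar cells:

```python
import zlib

cell = ('<table:table-cell table:formula="of:=[.E79]-[.E72]" '
        'office:value-type="float" office:value="-0.285714285714278">'
        '<text:p>-0.3</text:p></table:table-cell>')
doc = (cell * 1000).encode()
packed = zlib.compress(doc)

# Near-identical markup compresses by well over an order of magnitude
print(f"{len(doc)} -> {len(packed)} bytes "
      f"({len(doc) / len(packed):.0f}x smaller)")
```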

USB 3.0 external hard drive


This isn't really a question, but a follow-up to a question I asked Feb 10.

I put a two-port USB 3.0 card in my computer and added a USB 3.0 external hard drive (3 TB). I put my USB 2.0 external hard drive (1 TB) on the other port and compared the read speeds of those two drives and my two internal drives (2 TB each) on a file that is a little less than 2GB.

  • C drive: 8.8 seconds
  • Second internal: 8.3 seconds
  • USB 2.0 ext: 28.6 seconds
  • USB 3.0 ext: 7.3 seconds

So the USB 3.0 is nearly 4 times as fast as the 2.0, but what surprised me is that it is faster than my internal drives. Of course, the new external is almost empty and the others are not, and times are affected by fragmentation and by where on the disk the file is. Bubba73 You talkin' to me? 23:52, 14 February 2012 (UTC)[reply]

Yes, a speed test through a filesystem may be an imperfect comparison, when what you're interested in is the data rate of the connection rather than of the disks. If you were willing to boot into a Linux live CD, the following command does a non-destructive read-speed test of the physical disk surface (ignoring the filesystem layer altogether):
    sudo time dd if=/dev/sda of=/dev/null count=2097152
(where sda, sdb, sdc etc. are the disks; dd's count is in 512-byte blocks, so this reads the first 1 GiB). -- Finlay McWalterTalk 00:09, 15 February 2012 (UTC)[reply]
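If you'd rather stay on Windows without a live CD, the same kind of sequential-read timing can be sketched in Python. This goes through the filesystem (and the OS cache, unless you reboot or flush it first), so it's only an approximation, and the path is whatever test file you choose:

```python
import time

def read_throughput_mb_s(path, chunk=1 << 20):
    """Sequentially read a file in 1 MiB chunks and return the
    average throughput in MB/s. Caches must be cold for the
    number to reflect the disk rather than RAM."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            total += len(block)
    return total / 1e6 / (time.perf_counter() - start)
```

For a ~2 GB file this reproduces the stopwatch comparison above, with less manual timing error.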
I tested this with a typical file that I will be reading many times. Bubba73 You talkin' to me? 00:15, 15 February 2012 (UTC)[reply]
This is a bit unclear to me. Do you mean you read it many times (i.e. a duplication of results)? If so, did you ensure caching wasn't an issue? (If you didn't do duplication, I would suggest your results are basically a bit useless, particularly for such small differences.) In any case, I wouldn't consider a test running 7-9 seconds, even with duplication, to be that great. As you say, there is also the issue of fragmentation and location (if you have a lot of stuff on your C drive, remember that HDs are fastest at the beginning and slow down across the disk). Nil Einne (talk) 20:42, 15 February 2012 (UTC)[reply]
Yes, I rebooted each time to clear out the cache, otherwise it reads the file (from cache) in under 0.1 seconds. Over their lifetime, these files will be read many times, but they usually won't be in the cache. The computer I tested on has 16GB of RAM, so it can keep a moderately big file in cache, but some of the computers the program will be running on have only 4GB. Bubba73 You talkin' to me? 22:04, 15 February 2012 (UTC)[reply]

That shouldn't necessarily be a surprise. USB 3 is rated faster than certain implementations of SATA. ¦ Reisio (talk) 13:17, 15 February 2012 (UTC)[reply]

Yes, but my computer is pretty new (about 8 months old), so I thought it would have good SATA, but maybe it doesn't. Also, the USB 3.0 card I installed is a PCIe x1 card, which I don't think is as fast as USB 3.0. (You'd think they would make SATA as fast as the HD.) Bubba73 You talkin' to me? 16:17, 15 February 2012 (UTC)[reply]
I think this is going off the main point. Last time I checked (and it's been a while) I don't believe even the fastest 7200 RPM hard drives are capable of saturating a SATA 150 link. (SSDs, sure.) So it doesn't really matter if USB 3 is faster than some varieties of SATA if neither is capable of saturating the link. (Although, as I've mentioned before, historically USB has been significantly slower than its theoretical maximum.) More importantly, beyond the points raised above, you are comparing different disks. While they all sound fairly new (except possibly the USB 2.0 one) based on capacity, the speed may still vary.
For example, is it possible the SATA disks are the 'green' variety, which tend to spin at a slower speed (and therefore have slower sequential read speeds as well as seek times)? I don't know whether they put green or performance/7200 RPM hard disks in USB 3.0 externals; I would guess it depends on the HD. Besides, even if they are both green, IIRC 500 GB-platter 2 TB hard disks were fairly common. And someone (WD? Hitachi?) did produce a 5-platter 2 TB. Even if the 3 TB is a 5-platter (which seems to be the maximum they use in 3.5" HDs), it's still 600 GB per platter. And Seagate evidently is producing 1 TB platters [1].
While platter density isn't perfectly correlated with transfer speed, there usually is some correlation, for obvious reasons. In other words, there's a good chance all this test is saying is that your new 3 TB USB 3.0 HD is faster than the older 2 TB one, and perhaps that the USB 3.0 interface isn't limiting it.
If you really wanted to test this, however, what you'd need to do is disassemble the external disk (since it almost certainly has a SATA drive inside) and compare it directly. However, that would almost certainly void the warranty. Of course, if the external disk has eSATA you could test that.
You may also want to use a tool like the freeware CrystalDiskMark [2], HDDScan [3], or something on *nix to better test your disk.
Nil Einne (talk) 20:42, 15 February 2012 (UTC)[reply]
HDDScan didn't seem to have speed tests. I downloaded Crystal, but I couldn't find any speed tests in it either. It did tell me:
Did you download the wrong thing? CrystalDiskMark is not the same as CrystalDiskInfo, but it is offered on the same download page, so it's easy to get the wrong one. CrystalDiskMark's speed tests should be fairly obvious, since that's what its main interface is about. I've never actually used HDDScan, but it's sometimes recommended as another tool, and its page shows benchmarking details. I have also used HD Tune, but it's not freeware. There was also HD Tach, but it's getting a little long in the tooth and isn't even officially supported any more [4]. Nil Einne (talk) 11:53, 16 February 2012 (UTC)[reply]