Wikipedia:Reference desk/Archives/Computing/2014 November 15
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
November 15
Blocking web mail traffic to a specific version of Outlook?
Is there any way a network administrator can distinguish between the traffic going from Outlook 2010 versus Outlook 2013 to the mail server? That is to say, is there any way the admin can prevent users from using Outlook 2013 to retrieve emails from the company servers and force users to use Outlook 2010? Acceptable (talk) 17:20, 15 November 2014 (UTC)
- The standard mail protocols, such as IMAP, do not require the client software to report anything. You can see the IMAP exchange on Internet Message Access Protocol. It is plain text. Now, if you limit users to a proprietary Outlook protocol that only works with Outlook and is inherently broken for everyone who isn't using Outlook, then you can do anything you like. This is an ongoing security issue. The security management wants to make email secure by ensuring that nobody can send or receive email. The business management wants email to simply work without any sort of security issues at all. Smart businesses fall in the middle somewhere. 209.149.113.112 (talk) 18:13, 17 November 2014 (UTC)
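A caveat to the above: some clients do volunteer their identity via the optional IMAP ID extension (RFC 2971), but it is entirely client-controlled and trivially spoofed, so it cannot reliably enforce a version policy. Here is a minimal sketch of parsing such a response; the response string and the exact fields Outlook would send are illustrative assumptions:

```python
import re

def parse_imap_id(response: str) -> dict:
    """Parse an RFC 2971 IMAP ID response line, e.g.
       * ID ("name" "Microsoft Outlook" "version" "15.0")
    into a {field: value} dict. Returns {} if the client sent ID NIL."""
    match = re.search(r'ID \((.*)\)', response)
    if not match:
        return {}
    # Quoted strings alternate key, value, key, value...
    fields = re.findall(r'"([^"]*)"', match.group(1))
    return dict(zip(fields[::2], fields[1::2]))

reply = '* ID ("name" "Microsoft Outlook" "version" "15.0")'
print(parse_imap_id(reply))  # {'name': 'Microsoft Outlook', 'version': '15.0'}
```

Since any client may send any ID values (or none at all), filtering on them could only deter casual use of the "wrong" Outlook version, not prevent it.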
Frequent Computer Sleep
I bought a laptop (Lenovo Yoga 2) a little while back for use in the classroom. I haven’t had an occasion to use a personal laptop for years, so I have been predominantly using an iPhone (usually just for small tasks like checking emails, social media, etc.). I had become very accustomed to being able to turn the screen off in the short periods of time when I wasn’t using my phone. In other words, if I wasn’t using my phone for 5 minutes or so, there was no reason to shut it off, and no reason to have the screen on, so I would tap the top button to shut the screen off (lock the phone). Now that I’m back to using a laptop for full days, I’m looking to maximize my battery life.
In short, I’m wondering if it’s bad to routinely put my laptop to “sleep” while I’m not using it in order to save battery life. Note that I use my laptop in tablet mode as well, so it’s not just as simple as closing the screen. Is it bad for my computer to put it to “sleep” 5 or 6 times each hour? 72.10.109.131 (talk) 18:11, 15 November 2014 (UTC)
- I set my laptop to go to sleep if I haven't pressed a key for several minutes. I don't know of any detrimental effects. If you are leaving the machine unattended in the classroom, you might also want to set a password on reawakening (depending on how mischievous your students are). Dbfirs 21:07, 15 November 2014 (UTC)
- You can set your laptop to automatically turn off the backlight of the display, but otherwise remain on, after a short period of time. The minimum is probably 1 or 5 minutes. There should be an Fn-key sequence to do this manually; it is probably Fn-F9 for Lenovo Yoga 2 Pros. CS Miller (talk) 13:49, 17 November 2014 (UTC)
- (OP here) that's exactly what I was looking for, thanks! I realize now that my initial attempts to learn how to turn off the display backlight (via Google) were hidden by the many ways to turn off the keyboard backlight (which I don't even have). Thanks! 24.181.250.51 (talk) 00:10, 19 November 2014 (UTC)
- Turning off the display will certainly save some battery life, but if you want your battery to last all day, you may need to look at the sleep and hibernate options too. Dbfirs 12:53, 19 November 2014 (UTC)
Removing & Inserting
Once you 'Safely Remove Hardware and Eject Media', how do you reactivate the device without unplugging and re-plugging it to make it function?
Can someone help me with the steps, please? Regards.
(Russell.mo (talk) 18:35, 15 November 2014 (UTC))
- Microsoft says just unplug and replug. Perhaps someone knows a way to reactivate the device driver when unplugging is inconvenient? Dbfirs 21:00, 15 November 2014 (UTC)
- I have had to do this when remoted in. You can usually open device manager and scan for new hardware. On occasion I have had to reboot. BTW, you can also right click on the device and eject to safely remove it. -- Gadget850 talk 01:32, 16 November 2014 (UTC)
- Device Manager doesn't re-find the hardware when it scans in either Vista or Windows 7 (I haven't tried in more recent versions of Windows). I also tried disabling the USB driver, but there's no point because re-enabling requires a reboot. There ought to be some command to fool the driver into thinking that the device has been disconnected, but I don't know enough about the inner workings of Windows drivers to find it. Dbfirs 12:47, 16 November 2014 (UTC)
- Unable to find a 'scan for new hardware' option, though I found a 'scan for hardware changes' option - it doesn't work... I constantly unplug/remove the USB cable after disconnecting it from the taskbar. I thought there would be a way of reconnecting it without unplugging it. Note: Disconnecting it from the taskbar is safer than disconnecting it by right-clicking the removable device icon. -- (Russell.mo (talk) 13:24, 16 November 2014 (UTC))
- [citation needed] Asmrulz (talk) 14:55, 16 November 2014 (UTC)
- I'm fairly sure the claim is incorrect. They both do the same thing, so the idea that one is safer is nonsense, unless you count accidentally disconnecting the wrong thing, forgetting to disconnect, or not noticing it failed to disconnect, but I actually think these generally lean in favour of disconnecting it by right-clicking on the device. OTOH, some devices (such as a GPT external disk) won't give an eject option, so there are some non-safety-related reasons why you might want to use the icon in the notification area. Nil Einne (talk) 05:18, 17 November 2014 (UTC)
- BTW, a simple search finds these links [1] [2] [3] which suggest you may be able to disable and re-enable the device to convince Windows to reinitialise the drivers, although it may not work on Windows 7 (which may mean Vista too), only XP and Windows 8.1 (I imagine 8 too), and you may need to do it twice on Windows 8.1. Windows does ask you to restart, but doesn't seem to require it (i.e. just tell it no and you may be OK). You could try this with the disable USB controller option too and see if it works. Nil Einne (talk) 05:29, 17 November 2014 (UTC)
- I think the reason for saying the taskbar icon is safer is probably that it forces the user to manually check and close what is still running, whereas the eject option on right-click forces closure regardless. I agree that the safely remove option seems to call the same routine by both routes. Despite Nil Einne's excellent suggestions, I've not found a way to achieve reconnection without downloading Devcon and using the command line. Dbfirs 09:16, 17 November 2014 (UTC)
- (Dbfirs, I don't actually recall making any suggestion in this discussion, excellent or otherwise... are you referring to an earlier discussion or a suggestion from a different user? Nimur (talk) 17:48, 17 November 2014 (UTC))
- Oops! Apologies, Nimur. You do make excellent suggestions elsewhere, but it was a case of mistaken identity here. I've corrected my silly mistake. Dbfirs 00:59, 18 November 2014 (UTC)
- @Asmrulz: Some time ago, I thought both functioned the same way; even unplugging the USB drive manually, without safely removing it, was the best option for me. Ever since I started valuing my files and folders, I came to the realisation that when I eject the USB drive by right-clicking it, the flash drive keeps the flash light on, even after it says ‘you can safely remove it’. When I safely remove it from the taskbar’s notification area, it seems to turn the flash light off. I hope this helps. – experienced with an 8GB USB drive and a 1TB RHDD. -- (Russell.mo (talk) 10:18, 17 November 2014 (UTC))
- As I mentioned above, right-clicking gives two options: eject which forcibly closes any open files, with the possibility of loss of data, and safely remove which is the same as the task bar option. I have removed USB drives hundreds of times in the past without bothering to eject or safely remove, and I've never had problems because I always closed my files manually first. Nevertheless, I would recommend the safely remove option because it checks for you that no files are still open. Dbfirs 12:53, 17 November 2014 (UTC)
- Asmrulz asked for a citation, so I stated my experience. Sorry, I should've pinged. There is only one 'Safely Remove Hardware and Eject Media' option btw, and one 'eject' option, which you find after right-clicking on the RDD icon. The other options are available in the 'Device Manager' window.
- Anyway, I tried the link that Nil Einne provided; it functions the way Dbfirs (as well as Nil Einne) mentioned. It doesn't reconnect until a reboot... -- (Russell.mo (talk) 13:49, 17 November 2014 (UTC))
- Per MS, eject and safely remove have the same function.[4] -- Gadget850 talk 14:27, 17 November 2014 (UTC)
- I haven't done this in a while, so I checked my notes. Looks like I restarted the USB Mass Storage Device, which is described at 5 Ways to Remount Ejected or Safely Removed USB Device Without Unplug and Reinsert. -- Gadget850 talk 15:18, 17 November 2014 (UTC)
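For anyone scripting the disable/re-enable trick discussed above, Microsoft's DevCon utility exposes it on the command line. The sketch below only builds the command lines (a dry run) rather than executing them, since devcon.exe must be installed separately, needs admin rights, and the `USBSTOR` hardware-ID pattern is an assumption that varies by machine (check yours with `devcon find`):

```python
def devcon_remount_commands(hardware_id: str = "USBSTOR\\*") -> list:
    """Build the devcon.exe command lines that disable then re-enable a USB
    storage device, which forces Windows to reinitialise its driver.
    The default hardware-ID pattern is an assumption; verify with 'devcon find'."""
    return [
        ["devcon", "disable", hardware_id],
        ["devcon", "enable", hardware_id],
    ]

# Dry run: print what would be executed (e.g. via subprocess.run, as admin).
for cmd in devcon_remount_commands():
    print(" ".join(cmd))
```

`devcon restart <id>` is a one-step alternative to the disable/enable pair, and matches the "restarted the USB Mass Storage Device" approach mentioned above.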
- Thanks, that confirms that it is impossible to remount a USB drive in Vista and Windows 7 without downloading special software. Despite what Microsoft says in your link, eject is not the same as safely remove. They are separate options on right-click in Vista, and behave slightly differently in Windows 7, though I accept that eject probably calls the safely remove routine after closing files. Dbfirs 00:53, 18 November 2014 (UTC)
- No luck with my OS either.
Thank you all. I appreciate it. Regards. (Russell.mo (talk) 18:52, 18 November 2014 (UTC))
Speech recognition
Is anyone fully experienced with ‘speech recognition’ software? I say something and it writes something else. How can I make it effective? -- (Russell.mo (talk) 18:41, 15 November 2014 (UTC))
- You can make it more effective by training it to recognise your speech patterns. I haven't used such software for many years, but doesn't it still require several training sessions to accurately reproduce your words? You don't tell us which brand of software you are using, so we can't check exactly what correction facilities are built in. Dbfirs 20:48, 15 November 2014 (UTC)
- Yea, the only form of speech recognition that really works well is if it only has to differentiate between a small group of words, like a telephone answering system. When you get to many thousands of words it has to tell apart, it usually fails spectacularly, although training it to your voice helps somewhat. StuRat (talk) 22:01, 15 November 2014 (UTC)
- It's at least twelve years since I last spent time training speech recognition software to my voice (and used it to dictate minutes of meetings — a fairly wide vocabulary). I had limited success. For some sentences it was spot on, but for others it made an absolute mess of recognition. I had supposed that some progress might have been made in twelve years, but it seems that the problems are not easy to solve. Dbfirs 23:20, 15 November 2014 (UTC)
- Part of the problem is that we all read lips to some extent, as demonstrated by the McGurk effect, so ideally any speech recognition system would need to do the same. (Of course, people do manage to communicate by phone, but misunderstandings are likely more common there.) Then there's some artificial intelligence built into human speech recognition. If we think we hear "I'm going to the store to fry a loaf of bread", that doesn't make any sense, so we mentally correct "fry" to "buy". A good speech recognition system would need to be able to do that, too. StuRat (talk) 01:02, 16 November 2014 (UTC)
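One way to make "I say something and it writes something else" measurable is word error rate (WER), the standard accuracy metric in speech-recognition research: the word-level edit distance between what you said and what was transcribed, divided by the number of words you said. A minimal sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance, computed over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("how to recognize speech", "how to wreck a nice beach"))  # 1.0
```

A WER of 0.0 means a perfect transcript; training sessions should push the number down, which gives an objective way to tell whether the training is actually helping.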
It's the MS Windows 7 Ultimate 'Speech Recognition' software, which you can find in the 'Control Panel'. -- (Russell.mo (talk) 13:24, 16 November 2014 (UTC))
- Part of the problem is Speech segmentation: A popular example, often quoted in the field, is the phrase "How to wreck a nice beach", which sounds very similar to "How to recognize speech". Vespine (talk) 23:35, 16 November 2014 (UTC)
- The short answer is, it's an extremely complex problem and a continuing very active field of research. Vespine (talk) 23:37, 16 November 2014 (UTC)
- If you haven't tried this: if you can't train the software to work better with your voice, you could try to train your voice to work better with the software. Try to pronounce your words clearly and distinctly, and possibly with an American English accent. Vespine (talk) 23:38, 16 November 2014 (UTC)
- I've tried it; it behaves like me, always doing something else. I try to speak properly, but still it fails... Is there any software I can download which will function properly? I mean, I tried one twelve years ago and it worked brilliantly, and my English was worse than it is now. I was new to computers at that time, and I've forgotten the name of the software... -- (Russell.mo (talk) 10:32, 17 November 2014 (UTC))
- The best "I talk. It types." software comes from Nuance (used to be known as Dragon or Dragon Naturally Speaking). Anytime someone has a worthwhile competitive product, Nuance buys them. So, you are stuck with Nuance. It is not anywhere close to free. If you don't want to buy it, you can't have it. 209.149.113.112 (talk) 18:08, 17 November 2014 (UTC)
- I've just remembered that the success I had with free software was achieved with a separate microphone in a quiet room (and only after what seemed like hours of training). Dbfirs 01:04, 18 November 2014 (UTC)
- I have to work with free products atm. Good to know though. Thanks 209.149.113.112. I guess you are right in many ways, Dbfirs; I recall possessing a separate microphone and headphones twelve years ago, as well as a quiet area, while I was using 'speech recognition' software. I'm not doing any of that right now. I have a built-in microphone in a headset, on a stick that extends right next to my lips. I guess your suggestion might do the trick with the Windows 'Speech Recognition' software; an advantage it possesses is that you can train it... Thanks! -- (Russell.mo (talk) 18:51, 18 November 2014 (UTC))
Thank you all -- (Russell.mo (talk) 18:51, 18 November 2014 (UTC))
Flac to wav
I know that if I were to convert a WAV audio file to an MP3, the sound quality will be lost forever, meaning that it won't be regained even if I were to convert the MP3 file back to the WAV format. However, I read that FLAC and WAV formats differ in that FLAC files are compressed versions of WAV files. FLAC files have all the information that a WAV file has, but in compressed form. However, I've read that WAV files tend to sound better to some people. If that's the case, would that mean the sound quality is regained if I convert the FLAC file back to a WAV, or would the sound quality of the WAV be lost forever as well, and if so, why? Willminator (talk) 18:42, 15 November 2014 (UTC)
- I can think of only three reasons why the FLAC file would sound worse:
- The FLAC decoder/player is broken.
- The original WAV file used a format that isn't supported by FLAC, such as floating-point samples, so the conversion to FLAC actually lost information.
- They're imagining it.
- In the first case, you can get the original sound quality by using a non-broken decoder. In the second case, the information is already lost and can't be recovered by any decoder. The third case is the most likely. -- BenRG (talk) 05:44, 16 November 2014 (UTC)
- In this case, I tend to agree with BenRG: the user is probably imagining a sound difference. A lot of audiophile claims can be debunked by performing a double blind experiment! But if there really is a perceivable difference, then there is an error in the software. The error might be very subtle, like a rounding-error or a conversion mistake; or something egregious, like a buffer overrun or a real-time performance problem in the player; but when implemented correctly, a wave-form that has been compressed and then decompressed using FLAC is bit-for-bit identical to its original waveform. This hypothetical "error," if it existed, must have slipped past all the FLAC developers, who have strongly asserted that there is no such error.
- You can run FLAC with a verbose flag to perform a bit-for-bit check on input and output. If this is insufficient, you can inspect the code yourself. The code base is "fairly small," but I doubt I could verify it in just one afternoon. Personally, I would not try to read the code in its entirety, but I can envision building a version that let me write my own verification suite at key points along the signal's "critical path," (if I were so inclined).
- Nimur (talk) 16:49, 16 November 2014 (UTC)
- It's definitely not FLAC itself, they have done bit-for-bit comparisons of WAV and WAV-to-FLAC-to-WAV files - and they are indeed identical. It could conceivably be the player - but then there are a large number of WAV players too...if there could be errors in the FLAC player, then there might be similar errors in *some* WAV players. So it would be incorrect to say that WAV sounds better than FLAC - merely that one player sounds better than some other that just happens to load FLAC.
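The lossless-versus-lossy distinction the question asks about can be demonstrated with any lossless codec. In the sketch below, zlib is used purely as a stand-in for FLAC's lossless coding (the point being the round trip, not the codec), and the bit-masking step is only loosely analogous to the quantisation a lossy format like MP3 performs:

```python
import zlib

# Stand-in "PCM" data; zlib here is only a stand-in for FLAC's lossless coding.
pcm = bytes((i * 37) % 256 for i in range(10000))

# Lossless round trip: compress, decompress, bit-for-bit identical.
restored = zlib.decompress(zlib.compress(pcm, level=9))
assert restored == pcm

# A crude "lossy" step (dropping the low 4 bits of each byte, loosely
# analogous to quantisation in MP3) cannot be undone: the information is gone.
lossy = bytes(b & 0xF0 for b in pcm)
assert lossy != pcm
```

This is why converting FLAC back to WAV recovers the original exactly, while converting MP3 back to WAV cannot: the discarded bits no longer exist anywhere.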
- Audiophiles are really becoming tiresome people these days. They used to be people with huge, deep, interesting knowledge of how to squeeze the best from turntables, tape decks and imperfect amplifiers...it was a great hobby. You could argue endlessly about whether putting this head on that turntable with this amplifier and those speakers were better...than whatever the other person had because everything was a compromise of some kind or another. Then, almost overnight, we got digital audio with precisely perfect storage and essentially perfect reproduction - far better than the human ear can resolve - and at a price that even someone who doesn't give a damn about quality is happy to pay. Now those people have nothing left to discuss. So they'll pontificate about the relative merits of things like FLAC and WAV - even though there is no real difference. These are the same people who'll tell you that a $100 USB cable with gold plated connectors produces "more lively vocals with deeper bass resonance overtones"...which is without any shadow of a doubt, complete nonsense. There is no doubt that lossy formats like MP3 do sound SLIGHTLY different from WAV or FLAC - but with a lossless format, there is no way for anyone to tell. SteveBaker (talk) 01:43, 17 November 2014 (UTC)
- I mostly agree with SB, although I may additionally note that many people with decent equipment can't tell the difference from sufficiently high-bitrate MP3 (which with more modern lossy compression often doesn't have to be that high) with ABX testing. This isn't of course to deny that it's frequently possible for expert listeners with sufficiently good equipment, even at very high bitrates. See also Transparency (data compression). Of course, probably at least partially due to the widespread popularity of such lossy formats, some people actually prefer the lossy version when they can hear a difference. See e.g. [5] (I'm surprised no one pointed out that it's fairly unlikely anyone prefers 128 kbit/s MP3s) but also [6] (only the first part; ignore the second part about 192kHz/24bit, which appears mostly nonsense, as the only commentator pointed out). It's particularly funny when someone who considers themselves an audiophile does so unexpectedly. Actually, I'm surprised there isn't a movement for digital artifacts like there is for analog limitations and distortions. Nil Einne (talk) 06:18, 17 November 2014 (UTC)
- BTW, the Hydrogen Audio forums [7] and wiki are IMO a good place for reading discussions related to a lot of things audio, as they generally tolerate little nonsense. For example, anyone making a claim about being able to hear something generally at least needs an ABX result or they're given short shrift. Also, the 192kHz/24bit claim on the audiophile site I linked to is a useful reminder of the problem with many audiophiles. 24-bit audio, or at least 20-bit audio, could theoretically (although evidence for cases when it does is limited) provide detectable sound differences versus 16-bit in really extreme cases. Yet many audiophiles are convinced you need 192kHz as well, even though it's fairly unlikely any of them can really hear above 22.05kHz (let alone 24kHz, which 48kHz will provide; and since they aren't cats, the idea that 96kHz/48kHz isn't enough, which many of them seem to have, is just silly). Nil Einne (talk) 07:00, 17 November 2014 (UTC)
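For anyone wanting to score their own ABX run: under the null hypothesis of pure guessing, each trial is a fair coin flip, so the evidence is a one-sided binomial tail probability. A minimal sketch (the 12-of-16 threshold is a common convention, not a law):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the chance of scoring at least `correct`
    out of `trials` in an ABX test by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12/16 correct is often taken as evidence of a genuinely audible difference:
print(round(abx_p_value(12, 16), 4))  # 0.0384, i.e. below the usual 0.05 cutoff
```

A listener who "always hears the difference" but can't beat these odds over enough trials is, statistically, guessing.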
- Also, I was reminded of something from this article [8]. My understanding is that although the move to widespread availability of HD audio offers limited technical benefit (except in extreme cases where you really gain an advantage from more than 16 bits, or I guess if you are a cat :-P), there have been some benefits. For older content it may mean remastering using better techniques or equipment. For new content, audio engineers are sometimes less free with compression and other techniques that they are told, or believe, are favoured by the majority, but which many audiophiles (both those who spout nonsense and those who don't) dislike. Then again, reading a bit more, I'm also reminded that this demonstrates another problem with the claims of the nonsense audiophiles. A lot of the content they claim is superior with HD audio for technical fidelity (and not mastering choice/quality) was recorded with equipment and material that probably doesn't even achieve 22.05kHz/16-bit (93dB or whatever) fidelity, so any alleged advantages can only come from mastering interpolation. I should clarify that I'm only thinking of what format is necessary to relay the audio to a human listener. I'm not denying there may be reasons to capture these details, or to use higher fidelity particularly in the mastering stage, for various reasons such as rounding errors, or if you have other intentions (like if you do want your cat to listen to your audio). E.g. there's one mentioned at the end of [9]. Nil Einne (talk) 07:09, 17 November 2014 (UTC)
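The sampling-rate point above can be checked with the folding formula: once sampled, any tone above the Nyquist frequency (half the sample rate) is indistinguishable from a mirrored tone below it. A small sketch (the 26 kHz tone is hypothetical, purely to show the fold; 44.1 kHz sampling covers everything up to 22.05 kHz):

```python
def alias_frequency(f_hz, rate_hz):
    """Frequency that a sampled sine of f_hz is indistinguishable from,
    after folding about the Nyquist frequency (rate_hz / 2)."""
    f = f_hz % rate_hz             # fold into one sampling period
    return min(f, rate_hz - f)     # mirror into the band [0, rate_hz / 2]

print(alias_frequency(20000, 44100))  # 20000: below Nyquist, unchanged
print(alias_frequency(26000, 44100))  # 18100: above Nyquist, aliased down
```

So a 96 kHz or 192 kHz rate only adds headroom above what human hearing (roughly 20 kHz at best) can use; it does not make the audible band any more accurate.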
- This article says that testing has shown that WAV tends to sound better than FLAC. At least some people may not be imagining a sound quality difference. Willminator (talk) 19:00, 17 November 2014 (UTC)
- The problem is (and the article actually says as much) that the digital content in a FLAC file is bit-for-bit identical to the WAV. And it goes on to explain (as I did, above) that the difference must be due to the software they used to replay the FLAC and the WAV. These are obviously not identical - so there is a possibility of error in one or the other. Well, if that's true, then the problem is that there are MANY WAV players - not just one. So if the WAV player they tested produced different results from the FLAC player they tested, then what they are comparing isn't the file formats - it's the software that's doing the replay. It's very possible that there are different software replay modules for WAV that sound different from each other. Once you know that the two file formats contain the same data, the argument over which is better is done...they are precisely equally "good". What you should be arguing about is which replay software is better...which says nothing whatever about file formats. There may be better FLAC players and worse WAV players than the pair they happened to use. That said, I don't buy that either, because the article said that the PCM streams were identical too...they argued that CPU time spent doing decoding was the culprit - but that's bullshit because the timing of the data into the audio DAC is driven by the sound card, not the CPU. If the CPU doesn't make it, you get gigantic and very, very obvious clicks...not so, so subtle differences that only golden-eared audiophiles can pick up.
- So this is still very typical audiophile nonsense. SteveBaker (talk) 17:30, 18 November 2014 (UTC)
- Actually, the article makes a good point: computers make noise when they're computing, and that could contaminate the output. You'd have to do a blinded test to be sure, but it sounds more plausible than my first two suggestions. -- BenRG (talk) 19:25, 18 November 2014 (UTC)
- Wow! That's a bit of a stretch...but I suppose you *might* find that the additional workload of unpacking a FLAC might make the computer work harder and consequently run hotter and consequently turn the cooling fan on - which would consequently pollute the sound coming out of the machine...but even if that were remotely true (and assuming that their experimental setup was dumb enough to make it 'count') - then it's still bogus. I can run the FLAC player on (say) a RaspberryPi - which can be run off of batteries and makes no noise whatever. But EVEN if we're super-generous to the reviewer and agree that a computer with a fan is somehow a part of the test setup - you're still not commenting on the sound quality of the FLAC file format - only on the performance of the replay software. If I wrote a WAV player that (for some reason only known to me) computes the first 1000 digits of pi between each pair of sound samples it replays - then it would get the CPU running hotter than the FLAC player.
- So this is still BOGUS! The FLAC file format cannot, under ANY circumstances be said to produce worse audio...in the sense that MP3's most certainly can...you can only possibly be comparing the quality of the replay software - and then you have to say which, of the thousand or so WAV players out there you are comparing to which of the half dozen FLAC players. If you believe you can hear the difference between two sets of sound replay software - then your problem is nothing to do with file formats. SteveBaker (talk) 19:35, 21 November 2014 (UTC)