Wikipedia:Reference desk/Archives/Computing/2014 April 23

Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


April 23

OpenSSL

Apropos of this "heartbleed" thing, how can it be that something so critical to the operation of modern society can be left to a group of "11 members, of which 10 are volunteers, with only one full-time employee", with development of critical functionality apparently left in the hands of some random developer, with obviously no proper checking whatsoever? How is it that major companies tolerate using a system developed in such a half-arsed and amateurish way? 86.128.2.169 (talk) 02:34, 23 April 2014 (UTC)[reply]

To be fair, many people use alternative software products to implement secure transport (SSL and its ilk) that were not affected by the CVE-2014-0160 vulnerability. Many commercial operating systems do not use OpenSSL, and those software companies hire their own software teams to implement or integrate alternative versions of the SSL protocol. This obviously does not mean that such software is free of defects; but it's quite a mischaracterization to suggest that the volunteers at the OpenSSL team are the sole provider of this type of service. They are simply the most popular provider of a free software solution. Consider Dropbear, which is also free and open-source software.

OpenSSL is distributed under a license that expressly disclaims liability and states that the software is "as-is" with no guarantee of fitness for any purpose. This isn't just legalese nonsense - it means that any person or company who chooses to use OpenSSL is accepting the fact that its creators are not paid to provide support or to assume liability.
One advantage of commercial software - whether it is free software or not - is that a business arrangement can be made to assign liability. That means that a client can hold the software-provider accountable - and can bill them for financial damages - if the software has a defect.
Commercial software providers who accept such terms would be unwise to start incorporating software that they can't be held accountable for. Software companies hire experts, which means that far more than a small team of volunteers looks over such projects.
As a perfect example: my credit union (in which I have an obvious financial stake) performed a full internal audit in the wake of the Heartbleed bug; and they sent me a fantastic summary report replete with technical details. Their computer experts verified that OpenSSL was never used on any of our servers; and therefore our financial data was never jeopardized by the CVE-2014-0160 vulnerability. But here's the juice - as a client, I don't need to care if my credit union screwed up, or if they used open-source software, or if an open-source programmer screwed up... because if any of those screw-ups happened, then the financial institution is liable, and I am insured (it is a federally accredited, NCUA-insured institution). If their misfeasance with software caused my money to get lost, I can legally get my money back.
As a stockholder in the union, though, I definitely care that they've done the right thing and taken precautions! I prefer that the credit union follows best practices, provides transparency and accountability, and minimizes its liability, because that means that our group isn't losing money in the aggregate.
So, in this case, we have accountability at so many layers, from the financial transactions to the software vendor who provides the server infrastructure, all the way to the individual retail-banking-style members. We pool our resources to make sure we have the right technical and legal experts to protect our communal assets. Our credit union doesn't depend on ten or twelve open-source-software volunteers to watch our backs for us. I emphatically hope that everyone else's financial institutions are as diligent and transparent!
Long story short - whoever told you that "the whole world" is banking on ten or twelve volunteer open-source programmers has completely misled you.
Nimur (talk) 03:32, 23 April 2014 (UTC)[reply]
Most of the software that's powering the Internet was written (and donated for free) by unpaid programmers. Most web servers run Linux, with Apache doing the web serving, MySQL handling the database, and PHP generating the pages (this is such a common combination that we use the "LAMP" acronym as shorthand for it). A good chunk of people use Firefox and Chrome to view the resulting content. The software that's serving you Wikipedia right now ("MediaWiki") is entirely open-sourced and written by volunteers.
Linux alone contains around 16 million lines of code - and it's estimated that for a commercial organization to rewrite it would cost around $1.3 billion. It is absolutely certain that there are horrible security holes to be found there - and it's more than likely that new ones are being created at about the rate that old ones are fixed!
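(An aside on where figures like that $1.3 billion come from: such estimates are usually produced by a parametric cost model like Basic COCOMO, which turns lines of code into person-months of effort. A back-of-the-envelope version of the arithmetic, in C - the cost-per-person-year below is an assumed illustrative number, not one taken from whatever estimate Steve is citing:)

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Basic COCOMO, "organic" mode: effort in person-months = 2.4 * KLOC^1.05.
           The cost-per-person-year figure is an assumption for illustration only. */
        double kloc = 16000.0;                      /* ~16 million lines of code */
        double person_months = 2.4 * pow(kloc, 1.05);
        double person_years = person_months / 12.0;
        double cost = person_years * 250000.0;      /* assumed fully-loaded $/person-year */
        printf("effort: %.0f person-years, rough cost: $%.2f billion\n",
               person_years, cost / 1e9);
        return 0;
    }

(With those assumptions the model comes out at roughly 5,000 person-years - on the order of a billion dollars - which is the right ballpark for the figure quoted above.)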
But the sad fact is that software written by giant corporations is rarely much better. Recall the SPECTACULAR cost of the Y2K problem - scarcely any open-sourced software fell vulnerable to that. Y2K cost the world around $300 billion ($400 billion at today's money value) to clean up... Heartbleed is scarcely a blip compared to that. The recent Target security breach caused 40 million credit cards to be compromised - and we're talking names, numbers, expiration dates, home addresses and the CVV codes. The bad publicity lost Target 3% of their business for over a month, which is hundreds of millions of dollars in losses. Other similar breaches in entirely commercial software have caused hundreds of millions of credit cards to be compromised! All of these dwarf OpenSSL's problems.
It's truly unfair to point to the authors of OpenSSL when the problem is more or less universal. Any piece of software more than a few thousand lines long is more or less certain to have bugs of some kind... many of which are remotely exploitable. The problem with commercial software is that the owner of the code may seek to cover up the problems and could take a very long time to come up with a solution. With OpenSSL, the bug was fixed within hours of being reported and the patch was available for people to download in less than 12 hours. The reason for that speed is that when the source code is available for anyone to look at and update, fixes get done rapidly and the need to apply the fixes is widely broadcast.
Consider this breach: the companies affected reported a problem in August 2008 with software that's used for around 40% of all VISA and MasterCard payments - but it wasn't until they called in the US Secret Service and two companies that specialize in network security that the problem was found, in mid-January 2009. In terms of potential damage, that's horrific.
Heartbleed has hit the news mostly because it's relatively comprehensible to the layman (here is a cartoon that does a pretty good job of explaining it: http://xkcd.com/1354 ) and it seems so obvious. But that's just 20/20 hindsight. There are millions of bugs out there just waiting for someone to exploit them - most of them would require nothing more than a one-line fix - and most could be found if only someone had the time, money and enthusiasm to seek them out. For that reason, I very much doubt that any sizable piece of software that runs the web infrastructure is perfectly secure.
As security holes go, Heartbleed is only patchily useful. When you write the exploit code (which is really very easy), all you get back is a big pile of utterly random binary garbage - you still have to recognize that some sequence of bytes is a security code or a credit card number or a password rather than (say) the partial contents of an image file containing a photo of the company's cat. That's decidedly non-trivial. Other bugs allow more direct access into the target machine and are likely to be of more interest to serious bad guys.
SteveBaker (talk) 17:28, 23 April 2014 (UTC)[reply]
It's not hard to find credit card numbers or server private keys in data extracted via heartbleed. Both searches can be automated, and tools are in the wild now allowing script kiddies to do it.
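(To make "automated" concrete: the crude core of such a tool is just a scan of the leaked bytes for digit runs that pass the Luhn checksum used by payment card numbers. The sketch below, in C, is an illustration of the idea rather than any particular tool actually in the wild:)

    #include <ctype.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Returns 1 if the digit string passes the Luhn checksum used by
       payment card numbers, 0 otherwise. */
    static int luhn_ok(const char *digits, size_t n) {
        int sum = 0;
        for (size_t i = 0; i < n; i++) {
            int d = digits[n - 1 - i] - '0';
            if (i % 2 == 1) {            /* double every second digit from the right */
                d *= 2;
                if (d > 9) d -= 9;
            }
            sum += d;
        }
        return sum % 10 == 0;
    }

    /* Scan a leaked buffer for 16-digit runs that pass the Luhn check. */
    static void scan_dump(const unsigned char *buf, size_t len) {
        char digits[32];
        size_t run = 0;
        for (size_t i = 0; i <= len; i++) {
            if (i < len && isdigit(buf[i]) && run < sizeof digits) {
                digits[run++] = (char)buf[i];
            } else {
                if (run == 16 && luhn_ok(digits, run))
                    printf("possible card number ending at offset %zu\n", i);
                run = 0;
            }
        }
    }

    int main(void) {
        /* Fake "leaked memory": some text, a well-known Visa test number, some junk. */
        const unsigned char dump[] = "GET /account\r\n\x01\x02" "4111111111111111" "\x7f noise";
        scan_dump(dump, sizeof dump - 1);
        return 0;
    }

(Private keys are found the same way in principle: scan for the ASN.1/PEM structure of an RSA key and check whether the candidate primes divide the server's public modulus.)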
Some buffer-overrun bugs are subtle. The check might be invalidated by integer overflow, or by a later change to seemingly unrelated code. Heartbleed was not subtle. It was a bare memcpy in brand-new code whose length was simply not checked at all against the size of the source buffer. If you're doing a security audit of C code that contains a memcpy, this is the first thing you look for (well, the second thing, after the destination-size check). The people who allowed this code into OpenSSL without checking it for buffer overruns shouldn't be responsible for security-critical code. This is "20/20 hindsight" in the same sense that the sudden bankruptcy of a financial institution makes you realize in hindsight that the people running it were never competent. -- BenRG (talk) 19:00, 23 April 2014 (UTC)[reply]
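(For readers who don't write C, here is a simplified sketch of the pattern BenRG is describing. It is not the actual OpenSSL source - the structure and names are invented for illustration - but it shows the difference between trusting a length field supplied by the peer and checking it against what was actually received:)

    #include <stdlib.h>
    #include <string.h>

    /* Illustrative stand-in for a heartbeat-style "echo back N bytes" request. */
    struct heartbeat_msg {
        unsigned char *payload;  /* data actually received from the peer */
        size_t payload_len;      /* number of bytes actually received */
        size_t claimed_len;      /* length field taken from inside the message */
    };

    /* Vulnerable pattern: trusts the length the peer claims. If claimed_len is
       larger than payload_len, the memcpy reads past the buffer and the reply
       leaks whatever happened to sit in adjacent heap memory. */
    unsigned char *echo_vulnerable(const struct heartbeat_msg *m) {
        unsigned char *reply = malloc(m->claimed_len);
        if (reply == NULL) return NULL;
        memcpy(reply, m->payload, m->claimed_len);   /* unchecked length */
        return reply;
    }

    /* Fixed pattern: refuse requests whose claimed length exceeds what arrived. */
    unsigned char *echo_fixed(const struct heartbeat_msg *m) {
        if (m->claimed_len > m->payload_len)
            return NULL;                             /* drop malformed requests */
        unsigned char *reply = malloc(m->claimed_len);
        if (reply == NULL) return NULL;
        memcpy(reply, m->payload, m->claimed_len);
        return reply;
    }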
SteveBaker, most of Linux wasn't written by unpaid volunteers. Take a long, hard, un-propagandized look at the list of people who have commit access to the kernel. Take a look at how many of those people are on the payroll at Intel or IBM's Linux Technology Center, or are professors at universities who receive government grants to perform research and development on computer systems. Most of the hardware drivers available for Linux, and built into Linux, are produced by salaried employees at hardware vendors. A handful of projects actually are run by real volunteers - but "most of Linux" is free software because certain companies believe that free software is good for business.
And even MediaWiki, which is now open-source free software - is now most actively developed by people who are salaried employees of the Wikimedia Foundation. Nimur (talk) 20:58, 23 April 2014 (UTC)[reply]
There is definite misfeasance at play here. It's irresponsible for developers to release anything to production without first having it tested by a separate group of QA testers. Developers are no more QA testers than they are UX experts. There's a serious problem in our industry. A Quest For Knowledge (talk) 23:59, 23 April 2014 (UTC)[reply]
The OP could equally have asked: how can a billion-dollar company with thousands of employees convince so many millions of people to pay out good money, time after time, for software that is defective by design - globally costing its customers millions of dollars each month to mitigate its inherent vulnerabilities - only to find that they are then forced over to a new version and have to start all over again? Microsoft: Let’s Talk About Heartbleed® (Reported by Our ‘Former’ Security Chief) While the World Migrates From XP to GNU/Linux --Aspro (talk) 01:07, 24 April 2014 (UTC)[reply]
Fact check on that article, Aspro: the article you linked claims that Apple products use OpenSSL. Apple's official Cryptographic Services Guide on its developer webpage states: "although OpenSSL is commonly used in the open source community, it does not provide a stable API from version to version. For this reason, the programmatic interface to OpenSSL is deprecated in OS X and is not provided in iOS." If you consult Apple's opensource webpage, the last Apple operating system that shipped OpenSSL went to market four years ago. Modern Apple operating systems (which are available at no charge) use a different implementation for cryptographic services. More recent systems use something called a shim layer - the source code is available for inspection here: osslshim from Apple's Open Source page. That shim helps software that expects the OpenSSL API to work while using a different implementation underneath - most of which is also open-source software, available for inspection or hobby work. However, the official developer page suggests that industrial-strength software written for Apple platforms ought instead to write directly to the cryptographic services API, which is more robust and portable to iOS.
Microsoft's operating system uses SChannel, not OpenSSL. The official statement from Microsoft to its clients and developers, Information about HeartBleed and IIS, provides additional technical information about why Microsoft products are not affected by CVE-2014-0160. Obviously this does not mean that this commercial software is totally free of defects, but it solidly refutes the claim that either infrastructure is vulnerable to the bug people call "Heartbleed."
While I'm nitpicking, the TechRights article primarily cites itself as the reference for a variety of other claims. I don't consider it a very reliable article. The point is, your link claims that the vulnerability affects Microsoft and Apple products - but that is an unfounded claim: neither NIST's information page nor MITRE's information page corroborates it. Your article claims that Microsoft intentionally placed backdoors in its operating systems, but cites its own publication. It makes several other inflammatory claims that are equally unfounded. I do not believe it meets the standards we set for reliable sources. Nimur (talk) 21:12, 24 April 2014 (UTC)[reply]
What exactly are you attempting to get across: “the last Apple operating system that shipped OpenSSL went to market four years ago.” Are you saying it is my fault (and that of many hundreds of thousands of others) that my Apple is more than four years old? Also, Apple has just fixed their non-OpenSSL implementation: Apple Fixes Serious SSL Issue in OSX and iOS. No logo and website were created for that, were there? Microsoft's Azure has lots of OpenSSL installations on their servers that they have encouraged their clients to install. Next: does software wear out after four years? Power stations and other capital installations are still running vintage PDP-11's with original software - 35 years on. There are so many pots calling the kettle black here that it is no surprise the average Joe can't differentiate fact from FUD. Which is the very confusion that FUD sets out to achieve in the minds of the average Jane and John Doe. I think that maybe the Jane geeks see through the FUD but are content to let the little boys argue and fight about who has the better toys. There is only one cure for this: I see, I do, I understand.--Aspro (talk) 22:19, 24 April 2014 (UTC)[reply]
Apple releases Heartbleed fix for AirPort Base Stations A Quest For Knowledge (talk) 22:07, 24 April 2014 (UTC)[reply]
Thank you for the information and the news articles. I did some homework, because I was the original nit-picker, and I always like to re-verify my previous statements for accuracy. Apple's official article on this topic, Knowledge Base HT6203, confirms that a firmware update for AirPort Base Station to address CVE-2014-0160 was made available for products that Apple shipped in 2013. The firmware update mentions CVE-2014-0160, but it does not explicitly confirm whether OpenSSL was even used on the product, let alone whether it was actually affected! Let me emphasize that as an unprivileged user without access to all the firmware source code for AirPort Extreme, I can only speculate! But there is strong evidence to suggest:
  • ...if OpenSSL were present on that device, it seems plausible that it would have been 0.9.8, a version of the software so old that it pre-dates the portion of code in which "Heartbleed" existed... in other words, OpenSSL 0.9.8 did not actually contain the CVE-2014-0160 vulnerability. (And let me re-state that I don't know what version, if any, ever actually shipped on those WiFi products - on a machine where you can run your own code, checking the version is easy; a small sketch follows this post. I just read the fine print in the EULA: "Certain components of the Apple Software, and third party open source programs included with the Apple Software, have been or may be made available by Apple on its Open Source web site... You may modify or replace only these Open-Sourced Components; provided that ... you otherwise comply with the terms of this License and any applicable licensing terms governing use of the Open-Sourced Components." I bet there aren't too many programmers who actually do! But I've also long suspected that most people don't build their Linux from source, or compile their own operating system for their telephone... so why would most ordinary people waste their time munging their WiFi router?)
  • ...because Apple released a firmware update without releasing source, we can only speculate about what changes (if any) were made - all we know is that the update "addressed" the vulnerability! The release note states that the feature was not even used in the default configuration that ships with the product: "Only AirPort Extreme and AirPort Time Capsule base stations with 802.11ac are affected, and only if they have Back to My Mac or Send Diagnostics enabled." I still scrambled to verify all my units to determine whether an update was required. And I would be absolutely fascinated to see a write-up from anybody who actually attempts to exploit an AirPort Extreme running firmware 7.7.1 or 7.7.2: what exactly would attackers have the practical ability to extract, in light of the qualified statement "information from process memory"?
I'm actually surprised that nobody brought up CVE-2014-1266 - a recently-fixed, totally unrelated SSL bug in a totally unrelated library, that affected numerous products in their default configuration.
Nimur (talk) 19:39, 26 April 2014 (UTC)[reply]
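(The version check mentioned above: on a machine you control, where the OpenSSL development headers are installed, a few lines of C will report both the version the headers describe and the version of the library actually loaded at run time. Only OpenSSL 1.0.1 through 1.0.1f, plus a 1.0.2 beta, contained the Heartbleed code; 0.9.8 never did:)

    #include <stdio.h>
    #include <openssl/opensslv.h>   /* OPENSSL_VERSION_TEXT (compile-time version) */
    #include <openssl/crypto.h>     /* SSLeay_version() (run-time version) */

    int main(void) {
        printf("built against: %s\n", OPENSSL_VERSION_TEXT);
        printf("running with:  %s\n", SSLeay_version(SSLEAY_VERSION));
        return 0;
    }

(Compile with -lcrypto. Of course, this tells you nothing about an appliance like an AirPort base station where you can't run your own code - which is exactly the point about having to speculate.)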
  • So at the end of the day, the biggest threat to the internet that ever existed turns out to be just another little bug that would have passed unnoticed and been fixed, if it hadn't coincided with XP's end of life. And the security company that broke the embargo is patting its own back for all the publicity it has gained at the expense of security. Let us prepare, then (no, not you specifically, but the hoi polloi), for the next time Apple finds it has a maggot or Microsoft is found to have left its Gates open. Let's create a website for that vulnerability with a striking, snappy Heartbleed-type name and logo and invent some FUD of our own. --Aspro (talk) 15:46, 27 April 2014 (UTC)[reply]
Where did I say that XP suffered from the Heartbleed issue? The FUD was to warn organizations off from virtualizing XP on Linux now that XP is no longer supported as a native OS.--Aspro (talk) 13:21, 28 April 2014 (UTC)[reply]

Possible software conflicts?

Can installing an add-on JDK or JRE on Windows 8 cause Microsoft Flight Simulator X to become non-operational? If so, are there any JDKs or JREs out there that are known NOT to have this effect? Thanks in advance! 24.5.122.13 (talk) 04:53, 23 April 2014 (UTC)[reply]

I've never had a problem having Java installed alongside that game. Palmtree5551 (talk) 16:49, 23 April 2014 (UTC)[reply]
In my case, shortly after I installed Java in order to activate a chemical drawing program I needed, FSX crashed so badly that it wouldn't even uninstall or reinstall properly, much less run -- I had to nuke and pave my system to get this resolved. But I don't know if this was because of Java, or for some other reason. 24.5.122.13 (talk) 22:27, 23 April 2014 (UTC)[reply]

Looking for recommendations for a proxy server that runs on Windows

I want to run a proxy server on my LAN for the following two reasons:

  1. To block ads. If I can block ads at the LAN level, this saves me the trouble of installing multiple ad-blocking apps across all my browsers and computers. Also, if I block ads at the LAN level, this should also block ads on my mobile devices such as my iPad and my Chromebook.
  2. To monitor all network traffic. After reading that 40% of iOS and 41% of Android banking apps accept fake SSL certificates, I want to know which of my mobile apps are using SSL and which ones aren't.

I am looking for something that runs on Windows - and since I have no experience with proxy servers, something that has a good UI. Does anyone have any recommendations? I have never set up a proxy server, so I'm not sure what's good or what's commonly used. A Quest For Knowledge (talk) 22:45, 23 April 2014 (UTC)[reply]

FWIW, the Microsoft solution would be Microsoft Forefront Threat Management Gateway, but I doubt that would be practical or necessary for what you want to do. Vespine (talk) 22:53, 23 April 2014 (UTC)[reply]
I use Privoxy, but it doesn't have a configuration GUI as far as I know; you set it up by editing text files, which is not too difficult.
It sounds like you want to avoid apps that use SSL. That's probably a bad idea because whatever they use instead is likely to be worse than SSL, even SSL without certificate validation. What you really want is a proxy that will (optionally) try to mount a MITM attack on all SSL connections, so you can figure out which apps detect the attack. Privoxy doesn't do that, but this thread mentions a bunch of proxies that do. I haven't used any of them, though. -- BenRG (talk) 04:48, 24 April 2014 (UTC)[reply]
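(To illustrate the check that those broken banking apps skip: below is a minimal OpenSSL client in C that connects and then actually inspects the certificate verification result. It is a sketch under the assumption that the OpenSSL headers are available, not production code - among other things it does not check that the certificate's hostname matches, something OpenSSL versions before 1.0.2 will not do for you automatically. The hostname is a placeholder:)

    #include <stdio.h>
    #include <openssl/bio.h>
    #include <openssl/ssl.h>
    #include <openssl/err.h>
    #include <openssl/x509_vfy.h>

    int main(void) {
        SSL_library_init();
        SSL_load_error_strings();

        SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
        SSL_CTX_set_default_verify_paths(ctx);         /* load the system CA bundle */

        BIO *bio = BIO_new_ssl_connect(ctx);
        SSL *ssl = NULL;
        BIO_get_ssl(bio, &ssl);
        SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY);
        BIO_set_conn_hostname(bio, "example.com:443"); /* placeholder host */

        if (BIO_do_connect(bio) <= 0) {
            fprintf(stderr, "TLS connection failed\n");
            ERR_print_errors_fp(stderr);
            return 1;
        }

        /* The step a careless client omits: did the certificate chain verify? */
        long verify = SSL_get_verify_result(ssl);
        if (verify != X509_V_OK) {
            fprintf(stderr, "certificate verification failed (code %ld)\n", verify);
            return 1;
        }
        printf("certificate chain verified\n");

        BIO_free_all(bio);
        SSL_CTX_free(ctx);
        return 0;
    }

(An app that skips this kind of post-handshake check, or that installs a verification callback which always reports success, is exactly the kind that will happily accept the fake certificate a MITM proxy presents.)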
MITM linked for your convenience ;)- ¡Ouch! (hurt me / more pain) 08:17, 25 April 2014 (UTC)[reply]
Taking a step back, I'm starting to think the solution is to get a second router and attach a computer in between to monitor network traffic. That should give me everything I want. A Quest For Knowledge (talk) 16:42, 28 April 2014 (UTC)[reply]