Wikipedia:Reference desk/Archives/Computing/2015 December 9

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


December 9

Android 5 standby low battery notifier

hey! Can you point me to software for Android that can notify you (by beeping or some other sound) when your battery drops below a certain percentage level? There are a few on Google Play, but none of those that I've tried do it while the phone is in sleep (standby), only when you are active...

thanks — Preceding unsigned comment added by 37.200.78.45 (talk) 05:27, 9 December 2015 (UTC)[reply]

Auto-copy the content of a sub-document into the master-document? (MS Office)

I have created an MS Word master-document that contains a piece of content imported from a sub-document, but I have a problem --- each time I close and reopen the document, the text no longer appears there; a blue link to the sub-document's path appears instead. To make it appear again, I need to go to View > Outline and click "Expand Subdocuments". I need to do this manually each time anew.

Maybe there is a way to force copying the text from the sub-document, or at least to make it more permanent so it won't be collapsed at all? It is extremely important for me for cases when I send this document to others --- I can't expect anyone receiving it to do these steps... It's madness... Ben-Yeudith (talk) 09:04, 9 December 2015 (UTC)[reply]

Is Google's D-Wave 2X Quantum Computer, as it is called, really a quantum computer with qubits inside?

131.131.64.210 (talk) 14:23, 9 December 2015 (UTC)[reply]

The D-Wave does have qubits. However, it is less a computer and more a specialized calculation device. In other words, it is not what we call a "general" computer or a "universal" computer. It handles quantum annealing problems. If you want to work on those, it will do it very quickly. If, instead, you want to calculate correlations between two columns of data, it won't help (unless you can turn that into an optimization problem). The goal is to make a universal quantum computer sometime in the future. 209.149.113.52 (talk) 14:33, 9 December 2015 (UTC)[reply]
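To make "quantum annealing problems" concrete: the machine minimizes a quadratic objective over binary variables (a QUBO, quadratic unconstrained binary optimization). Below is a minimal classical sketch in Python - the coefficients are invented and exhaustive search stands in for the annealer, so this illustrates the shape of the problem, not D-Wave's API:

    import itertools

    # Illustrative 3-variable QUBO: linear terms on the diagonal,
    # couplings off the diagonal (all coefficients invented).
    Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,
         (0, 1): 2.0, (1, 2): 2.0}

    def energy(bits, Q):
        # Objective value for one assignment of the binary variables.
        return sum(coeff * bits[i] * bits[j] for (i, j), coeff in Q.items())

    # Exhaustive search stands in for the annealer at this toy size.
    best = min(itertools.product((0, 1), repeat=3), key=lambda b: energy(b, Q))
    print(best, energy(best, Q))    # (1, 0, 1) with energy -2.0

An annealer is built to find low-energy assignments of exactly this kind of objective; it cannot run arbitrary programs.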
The topic of defining a "qubit" was also discussed on the reference desk in May 2014. I think you will find that "qubit" is so weakly defined that nearly anything can be called a qubit. For example, a violin string may be either silent or vibrating - it can exist in two states (a ground state and an excited state), approximating two energy levels of a non-ideal oscillator system. Because a violin string is pretty strongly resonant, the preferred oscillation frequencies are quantized. Because a violin's four strings can resonate and supply energy to each other, these four "qubits" can be called "coupled", and some equation can be written to describe how weakly or strongly they effect state-changes in each other. These facts satisfy the (unreferenced) definitions of a qubit as explained in our encyclopedia article. So: is a violin a 4-qubit system? What external circuitry would you need to attach to it in order to call it a "4-qubit computer"? How functional would that "external circuitry" remain if you then proceeded to remove the violin from the computing machine?
The trouble with D-Wave is that it is not obvious how much of the "computer" is inside their refrigerated physics experiment that they call the "quantum" part, and how much of the computation is actually performed by the conventional computer that is attached to that device. I would put forward that their "device" is more similar to a thermal noise generator - similar to the devices sometimes used to generate entropy for secure computing applications. As such, the device might be useful for some experiments - like simulated annealing or monte carlo method computer algorithms. However, I'm not convinced that it's particularly novel - certainly not worth its cost.
Nimur (talk) 16:26, 9 December 2015 (UTC)[reply]
A violin supercomputer definitely sounds like something out of a Douglas Adams novel :) 131.131.64.210 (talk) 17:13, 9 December 2015 (UTC) [reply]
D-Wave doesn't hide their technology. They are using niobium qubits. They make it very clear that it is not a general purpose computer. It is lacking logic gates. It only does optimization of multivariable equations. Further, it doesn't have error correction. They claim that because it works in low energy states, they don't need it. I don't think the issue is "Do they have qubits?" It is "How can we step from this specialized device to a general computing device?" 209.149.113.52 (talk) 17:15, 9 December 2015 (UTC)[reply]
Well, do their sales team ever have an answer for you! Your algorithm, written in MATLAB or Mathematica or Python or FORTRAN (... or just, whatever) simply runs through their software stack, which handles all the hard parts by way of a Quantum Meta Machine. This software - especially the MATLAB interface - definitely exists* - or at least is in a superposition of existing and non-existing (*under development... since 2009).
Here is their software architecture overview. When I watched their tutorial video on Map Coloring, I was intrigued at first... but after watching to the end, I can only conclude that this product is aimed at people who have an absurd misunderstanding of computer science.
For example, they describe the quantum annealing solution as an alternative to "random solution-guessing." This is a straw man fallacy! It is very true that randomly guessing solutions to the map coloring of the United States would be inefficient. But this is not how we design actual graph-coloring algorithms! If I wanted to write a for loop to iterate over a variable i from one to six... I would not roll a die six times and hope that, statistically, I get an ordered sequence! If I did use this method, a quantum computer would beat my algorithm... in exactly the same way that D-Wave correctly found a "needle in haystack" solution. A "randomized" effort to generate a for-loop for i=1 to i=6 would have a one-in-46,656 chance of being right - which is statistically harder than finding a four-color solution for the map of Canada by random guessing! We could replace "i" with six qubits; construct, and then solve, a constraint-equation so that the objective function is minimized only when bit b_n is larger than bit b_(n-1); then program that into the D-Wave and let that solution anneal... this method would, in fact, yield better results than randomly guessing i six times in a row. But it's not better than a real, actual, conventional for-loop.
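A hedged illustration of that arithmetic (the numbers come straight from the paragraph above; this is not anyone's real benchmark):

    import random

    # Straw-man "randomized for-loop": roll a die six times and hope the
    # rolls come out as the ordered sequence 1, 2, 3, 4, 5, 6.
    # P(success) = (1/6)**6 = 1/46,656 per attempt.
    attempts = 1
    while [random.randint(1, 6) for _ in range(6)] != [1, 2, 3, 4, 5, 6]:
        attempts += 1
    print(f"random guessing took {attempts} attempts")  # ~46,656 on average

    # The conventional algorithm nobody would actually replace:
    for i in range(1, 7):
        pass  # six iterations, deterministic, done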
The point here is that the quantum computer is "outperforming" an absurdly-designed, altogether implausible version of a "conventional" algorithm that we would never actually use.
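For contrast, a sketch of one real conventional approach to map coloring - plain backtracking search - on an invented four-region adjacency map (not real geography):

    def color_map(adjacency, colors):
        # Classic backtracking graph coloring: assign regions one at a
        # time, trying only colors that don't clash with colored neighbors.
        regions = list(adjacency)
        assignment = {}

        def ok(region, color):
            return all(assignment.get(nb) != color for nb in adjacency[region])

        def solve(k):
            if k == len(regions):
                return True
            for color in colors:
                if ok(regions[k], color):
                    assignment[regions[k]] = color
                    if solve(k + 1):
                        return True
                    del assignment[regions[k]]
            return False

        return assignment if solve(0) else None

    # Invented mini-map: A borders B and C; B and C border each other and D.
    adjacency = {"A": ["B", "C"], "B": ["A", "C", "D"],
                 "C": ["A", "B", "D"], "D": ["B", "C"]}
    print(color_map(adjacency, ["red", "green", "blue"]))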
And this is to say nothing of the conventional optimization algorithm that must be performed to set up the "qubit" connectivity. This optimization problem is a complex one - and it needs to be performed for each qubit, as described in D-Wave's marketing literature. This is the same logical fallacy that I described in 2013 when we discussed why a "spaghetti sort" is not actually, truthfully scalable. If you casually neglect every part of the computation that is difficult to scale, the algorithm sure does appear to be infinitely scalable in constant time! D-Wave even has to serialize its quantum operations - so that makes its runtime O(n) - linear at best (and probably much, much worse for large, complex, or densely-interconnected graphs)! That directly undercuts the supposed benefits of quantum computing. The dream they've been selling is that "super-large" problems could be solved with magic massively-parallel qubit entanglement - except they forgot to emphasize the annoying part where you have to split the problem into n tiny sub-problems and iterate over them sequentially. Video, at 10m50s. They leave that annoying part out of the sales pitch, but it snuck into the technical tutorials!
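A toy cost model of that serialization point (every number invented): if the hardware can only anneal a fixed-size chunk per run, total time grows linearly in the problem size no matter how fast each individual run is:

    CHUNK = 512          # invented per-run hardware capacity, in variables
    RUN_MS = 20.0        # invented constant time per annealing run

    def total_runtime_ms(n_variables):
        runs = -(-n_variables // CHUNK)   # ceiling division: sequential runs
        return runs * RUN_MS

    for n in (512, 5_120, 51_200):
        print(n, total_runtime_ms(n))     # cost grows linearly with n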
Have you noticed that D-Wave customers have been very reluctant to provide wall-clock times on any of their benchmarks?
I've seen my share of "weird" computers, and this one is way more snake-oil-y than most.
Nimur (talk) 03:55, 10 December 2015 (UTC)[reply]
Nimur, do you know anything about quantum computing? Why do you think you're qualified to talk about this? Most of the text in this thread is by you, but the only useful responses were by 209.149.113.52, who clearly does know what he/she is talking about.
When you say the runtime is linear at best, you seem to be confusing the size of the output with the size of the search space. If the solution is n bits long, there are 2^n possible solutions, and a machine that could find the global optimum in O(n) time, or any low-degree polynomial in n, would revolutionize the world. So needing O(n) time to touch all qubits is not a problem.
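The distinction in numbers - the cost of addressing the qubits grows linearly, while the space they range over grows exponentially. A trivial illustration:

    # Touching n qubits is O(n); the assignments they range over number 2**n.
    for n in (8, 16, 32, 64):
        print(f"n = {n:3d}: linear work ~ {n}, search space = 2**{n} = {2**n:,}")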
The theory behind what they're trying to do is described in the article quantum annealing, which is actually pretty readable. It's similar to classical simulated annealing, but there are known problems for which it's asymptotically faster; in at least one case the speedup is quadratic (the same as the speedup of Grover's algorithm over classical brute-force search). Unfortunately, it's not known to be asymptotically faster than classical Monte Carlo simulation of quantum annealing. So this doesn't seem to be a case of a quantum computation offering an asymptotic speedup over the best known classical algorithm. At best it could be faster the way GPUs are faster than CPUs for some tasks. It's expensive now but so were GPUs at one time.
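For reference, the classical baseline being compared against - simulated annealing - fits in a few lines. This is a generic textbook sketch with a toy objective and an invented cooling schedule, not D-Wave's algorithm and not quantum annealing itself:

    import math
    import random

    def simulated_annealing(energy, n_bits, steps=10_000, t0=2.0):
        # Classical simulated annealing over bit strings: propose single-bit
        # flips, always accept improvements, and accept uphill moves with
        # Boltzmann probability exp(-delta/T) as the temperature T cools.
        state = [random.randint(0, 1) for _ in range(n_bits)]
        for step in range(steps):
            temp = t0 * (1 - step / steps) + 1e-9   # linear cooling schedule
            neighbor = state.copy()
            neighbor[random.randrange(n_bits)] ^= 1
            delta = energy(neighbor) - energy(state)
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                state = neighbor
        return state

    # Toy objective: number of 1-bits (global minimum is all zeros).
    print(simulated_annealing(lambda s: sum(s), n_bits=20))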
All of this is explained in the preprint that seems to have spawned the current media cycle. Scott Aaronson also has a blog post about it. -- BenRG (talk) 08:11, 10 December 2015 (UTC)[reply]
BenRG, it seems that you are directly questioning my "qualifications" as if you'd like to establish my credentials before I contribute to the encyclopedia. This is not how Wikipedia works. If I am wrong or mistaken, please correct me where I err. That standard applies, irrespective of my credentials or experience. I'm tempted to return the question tu quoque, except that I don't really judge your correctness by your academic degrees. I have met many well-credentialed people who were also wrong.
For example, you are wrong when you write "needing O(n) time to touch all qubits is not a problem." It seems that you failed to notice that for each qubit, an optimization problem must be computed: so that's not O(n) over n qubits - it's n iterations of an optimization problem. This is documented in the video tutorial I linked. This wrongness is independent of whether you are "qualified" to write on this topic.
By all means, buy yourself a D-Wave - rather, attach yourself to an organization that already has one - and experience the machine first-hand. I am very curious to see how you would respond after using one.
Nimur (talk) 14:59, 10 December 2015 (UTC)[reply]
The issue here is defining and measuring quantum speedup. So, we are currently in a semantic argument. The data doesn't change. Quantum annealing devices can be faster than conventional silicon devices in solving specific problems. However, if I throw more and more silicon at the problem, the silicon solution will eventually be faster than the quantum solution. 209.149.113.52 (talk) 15:47, 10 December 2015 (UTC)[reply]

What user-experience do non-Linux, non-Windows, non-Mac OSs have?

Does any "exotic" OS give a good user-experience? I mean Plan 9 or OpenBSD and any one you could think about. Are they any good from the user-experience perspective? --Scicurious (talk) 16:53, 9 December 2015 (UTC)[reply]

"Good" is a matter of opinion. We don't do matters of opinion here. --76.69.45.64 (talk) 16:56, 9 December 2015 (UTC)[reply]
I dispute both of your claims. Many aspects of quality can be measured. And there are lots of people around here posting their opinion. --Denidi (talk) 18:40, 9 December 2015 (UTC)[reply]
Both Plan 9 and OpenBSD are POSIX-compatible. Therefore, they run POSIX applications, such as KDE and GNOME. The user (assuming you are referring to the meat-filled skin sack sitting in front of the computer screen) commonly uses a GUI. For POSIX systems, KDE and GNOME are the most popular. In the end, the usability of any POSIX-compatible operating system is going to be roughly the same as any other. It is not worthwhile to make a distinction for "users". 209.149.113.52 (talk) 17:08, 9 December 2015 (UTC)[reply]
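That portability claim is visible at the API level: a program that sticks to POSIX-standardized calls runs unchanged on Linux, any BSD, or any other POSIX system with a Python port. A trivial sketch:

    import os

    info = os.uname()              # POSIX uname(): identify the system
    print(info.sysname, info.release)

    pid = os.fork()                # POSIX fork(): spawn a child process
    if pid == 0:
        os._exit(0)                # child exits immediately
    os.waitpid(pid, 0)             # POSIX wait(): reap the child
    print("forked and reaped child", pid)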
And what makes so many more users choose Linux, and not another POSIX-compatible OS, as their operating system? What tips the balance? --Scicurious (talk) 18:09, 9 December 2015 (UTC)[reply]
Freedom and availability have made Linux popular. 209.149.113.52 (talk) 18:59, 9 December 2015 (UTC)[reply]
Software range, software availability and software price (ZERO DOLLARS). By software I mean gnuplot and other applications. 175.45.116.66 (talk) 01:25, 10 December 2015 (UTC)[reply]
This deserves a more detailed answer. A lot of software, including your example of Gnuplot, is written for POSIX, which means it will run unmodified on any POSIX-compatible system. It is true that some big proprietary programs like Oracle Database often support Linux but not other Unix family members (Oracle runs on some of the big proprietary Unixes, but not BSD). Apart from said proprietary programs, I would say the main factors behind Linux's popularity are its hardware support and the bandwagon effect. And the hardware support is itself largely a result of the bandwagon effect; if anyone is going to write device drivers for platforms other than Windows and OS X, they're probably going to pick Linux as the other platform, because it has the next largest user share. To understand why, it's useful to understand the history; Linux rapidly attracted support in the 1990s as the free Unix-like for commodity PC hardware while BSD was under the cloud of copyright infringement lawsuits, and also various community conflicts and forks. Linux avoided these potential copyright issues, being written from scratch, and avoided any large forks splitting the community. See History of Unix, History of free software, Unix wars, and I highly recommend The Art of Unix Programming for a more detailed history. --71.119.131.184 (talk) 02:53, 10 December 2015 (UTC)[reply]
There are also lots of special purpose operating systems for applications like avionics and industrial controllers. Many of these systems present a user interface or otherwise provide "user experience." Are you interested in such single-purpose computers?
For example, here is AVWeb's review of the user interface on the GNS530, a navigation computer designed for small airplanes. Garmin doesn't publicize the implementation of the operating system on their higher-end products (for all we know, it could be Linux-like, although that'd be hard to get certified). However, here's an article from Mentor stating that the Nucleus RTOS would be used on the Garmin CNX80, a different product. One might surmise that all of Garmin's products are built on top of similar RTOSes that are quite different from your personal computer software.
Nimur (talk) 02:05, 10 December 2015 (UTC)[reply]

December 31, 4700

I see December 31, 4700 used as a "NULL" value for dates in databases rather often. However, it isn't a numeric limitation that I know of (the nearest such limit arrives in 2038, if I remember correctly). So I just googled, and I didn't find any reason to pick that particular year. Does anyone here know the reason? 209.149.113.52 (talk) 20:16, 9 December 2015 (UTC)[reply]

December 31, 4712 AD is the highest value for a DATE field in Oracle 7: see this table. Later versions of Oracle go up to December 31, 9999. Tevildo (talk) 22:07, 9 December 2015 (UTC)[reply]
In Oracle, the DATE range used to be Jan. 1, 4712 B.C.E. to Dec. 31, 4712 C.E., and a date was merely a number stored with hour/minute/second precision. So that range, if I have calculated correctly, amounts to about 300 billion seconds. Maybe you can translate this to a 32-bit limit or something. Nevertheless, it was fixed at 7 bytes per value, and NUMBER itself was precision 38. Sandman1142 (talk) 11:12, 11 December 2015 (UTC)[reply]
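A quick sanity check of that figure (Julian years, ignoring calendar subtleties, purely illustrative):

    # 4712 BCE through 4712 CE, with no year zero: 4712 + 4712 years.
    SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60      # Julian year
    total = (4712 + 4712) * SECONDS_PER_YEAR
    print(f"{total:.3e} seconds")                 # ~2.97e11: "about 300 billion"
    # For comparison: 2**32 ~ 4.3e9 is far too small to count those seconds,
    # while 2**38 ~ 2.7e11 is at least in the right ballpark.
    print(f"{2**32:.3e}  {2**38:.3e}")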

Are certificates installed in Firefox safe?

In the certificate settings I have tabs for Your Certificates, People, Servers, Authorities, and Others: one under Your Certificates, none under People or Servers, a full list under Authorities, and under Others I have Yahoo, Google, Skype, and Firefox add-ons.

Is it a security risk to install certificates here? How can I know that the certs are all OK? If a user installs a certificate from an attacker, what could the attacker do?--Denidi (talk) 23:31, 9 December 2015 (UTC)[reply]

We need to carefully distinguish between regular certificates and root certificates. A root certificate can be used to authorize a different certificate. A malicious attacker who installs a root certificate could then successfully phish you: your web browser would trust any website that the attacker wanted you to trust.
You can add a new certificate authority (i.e., a new root certificate) in Firefox. This is a security risk - but only if you cannot trust the certificate you are adding.
The other category - "regular" certificates - is in some sense less dangerous, because such certificates don't cascade the risk down an entire chain of trust. However, if (somehow) one single certificate is compromised by an attacker, then you will trust that one attacker on that one website.
So, what is the "risk"? Well, when your browser trusts an attacker's certificate - or trusts a certificate signed by a compromised root authority - then your browser will willingly send encrypted data to that attacker. What data? Any data you send - like your passwords, your bank credentials, or anything else you enter on a web form when you access that specific site. The gotcha is that some web traffic also happens invisibly (without displaying any UI to you). In particular, if you have configured Firefox to save credentials, or if a website relays information to some other website in the background, or if a browser extension or software bug causes sensitive data to be sent... then the attacker could see that data.
Certificates exist in your browser as proof that you trust something. Every time you trust anything, you are exposing yourself to some risk.
Nimur (talk) 02:16, 10 December 2015 (UTC)[reply]
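To make the mechanics concrete, here is a minimal sketch (hostname illustrative) of what the trust store is consulted for. Python's ssl module will only complete this handshake because some root certificate in the local store vouches, directly or through a chain, for the server's certificate:

    import socket
    import ssl

    host = "example.org"                      # illustrative host
    context = ssl.create_default_context()    # loads the system's trusted roots

    with socket.create_connection((host, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print("subject:", dict(x[0] for x in cert["subject"]))
            print("issuer: ", dict(x[0] for x in cert["issuer"]))

An attacker who slips a rogue root into that store makes this same handshake succeed for a certificate they minted themselves - which is exactly the phishing risk described above.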