Wikipedia:Reference desk/Archives/Computing/2017 May 24

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


May 24

Signed zeroes

In most contexts, zero holds a unique place: it is neither negative nor positive, because it sits at the very middle of the number line, and the values of all other numbers represent their distance from it. In computing contexts with signed zeroes, does anything occupy that median, neither-negative-nor-positive place that zero occupies elsewhere? Nyttend (talk) 02:20, 24 May 2017 (UTC)[reply]

The standard IEEE 754 number system has only the two signed zeros, and I'm not aware of any other number system that has three (positive, negative, and neutral), if that's what you're asking.
They both occupy the center of the number line, of course. And they compare equal to each other.
ApLundell (talk) 03:48, 24 May 2017 (UTC)[reply]

Do you mean like a datum?--Shantavira|feed me 08:02, 24 May 2017 (UTC)[reply]

  • Signed zeroes aren't numbers; they're bit patterns that represent a number, and there can be multiple bit patterns representing the same number. Mathematically it is all just the same one number. This is a nuisance: it makes comparisons more complicated.
A similar problem exists in truthiness (read the talk page; one editor's ego keeps removing it from the article), where one value is defined as "false" and all other values (very many of them, up to the size of the word used) are treated as "true". This too can be a tricky situation for the unwary: comparing values for equality against a canonical true value (rather than for inequality against the singleton false) will fail with many false negatives. Andy Dingley (talk) 08:25, 24 May 2017 (UTC)[reply]
Most processors have a "compare with zero" or similar instruction ("or" a register with itself and set the flags, or even a flag bit that is the OR of all the bits in a register at all times, etc.). Why is checking for nonzero a problem? Asmrulz (talk) 14:20, 24 May 2017 (UTC)[reply]
But the point here is that some representations (such as setting the sign bit on a zero magnitude) are still valid representations of a mathematical zero (and aren't forbidden by the representation standard), even though the bit pattern is no longer the all-zeros pattern that a binary integer ALU would recognise as zero (a numeric coprocessor ought to understand it, though).
In practice, if just the exponent field of an IEEE 754 single is zero (whatever the mantissa bits), then the overall value is so close to zero (a magnitude of about 1.2E-38 at most) that it's probably a mistake not to evaluate it as equal to zero. But of course, no good programmer compares floats to zero, do they?
Have a play with this: https://www.h-schmidt.net/FloatConverter/IEEE754.html
Andy Dingley (talk) 17:25, 24 May 2017 (UTC)[reply]
I thought this was about truth values. Never mind, I re-read your paragraph and see what you mean: there's only one bit pattern for "false" but many for "true" (which I'd say is not a problem either, at least not in C, where the idiomatic way of checking two variables for truthiness is to write a && b, which checks them for nonzero rather than for equality to "true"). As to good programmers, I imagine they don't compare floats to any constant, but to constant±epsilon. Asmrulz (talk) 12:51, 25 May 2017 (UTC)[reply]
Good programmers know that blindly replacing "unreliable floating point comparison" (which is a myth: 0.1+0.1+0.1+0.1+0.1+0.1+0.1+0.1+0.1+0.1!=1, but that's because of rounding in the addition, not because of anything to do with the comparison) with toleranced comparison produces a non-transitive "equality" which, not being an equivalence relation, invalidates many sensible uses of equality (e.g., in conjunction with hashing). Which form of comparison to use thus depends on understanding the problem, not on knowing a "good programmer" rule. --Tardis (talk) 13:48, 26 May 2017 (UTC)[reply]
those are very good programmers. Asmrulz (talk) 21:39, 26 May 2017 (UTC)[reply]

Oh boy - this is the second time this week I've had to deal with somebody storing booleans in floating-point data types... For better or for worse, programmers of most computer software can interpret bit patterns in any way they like. Most of the time, they choose to use standard bit representations, because it's easier, more consistent, and harder to mess up. But if you needed some kind of extra special tagging system for a particular application, beyond the mathematical conventions used in standard representations like IEEE 754, then you could pack additional status into a compound data type (a "struct", in many languages). To do so in a way that is simultaneously useful, mathematically rigorous, and extensible - WLOG, as the mathematicians say - would be very difficult. But you could! Consider:

int main(int argc, char** argv)
{
  typedef enum {
      positive,
      negative, 
      exactly_zero
    } sign_convention;

  typedef struct 
  {
    double value;
    sign_convention flag;
  } number;

  number x;
  x.value = 2.718;
  x.flag = exactly_zero;

  return 0;
}
/* Note to novice programmers: don't do this. */
/* Note to advanced programmers: don't do this either. */

This is a legal C program and it compiles without any problem on my computers. How you wish to interpret a "number" whose sign-convention flag disagrees with its value is at your discretion. But before you say it is wrong - consider that in modern mathematics, we have the power of definition. If, for the purposes of my application, I define the positive-ness or negative-ness of my data type to depend only and exclusively on my sign_convention flag, then I have the freedom to use this in whatever fashion I see fit. If I had decided to be particularly obtuse, I could have used type punning to pack the status flag inside the value itself, provided that I used the value with extraordinary caution.

In response to the original question's implicit ask: why would we ever do this? Why not just use positive and negative floats the way our forebears intended? My mind races toward interesting uses of the negative sign in various elements of graph theory, topology, and oriented surfaces in arbitrary dimension; toward the esoteric basis sets that extend the real plane into higher dimensions (like the quaternions); and so on. These are very real and very common data types. It is a near-certainty that the computer you are using right now includes a graphics-processing arithmetic unit capable of using, and interpreting, such esoteric sign conventions - at least in its internal data representations.

As our great computer scientists frequently remind us, we can and should exercise our brains and reevaluate the fundamental mathematical assumptions that exist in our standard machine representations of numeric theory. Messing around with the way your machine automates its sign convention might shake out a fundamental flaw in your application software. It never hurts to inject a little bit of paranoia into your floating point code. Nimur (talk) 14:24, 24 May 2017 (UTC)[reply]

  • Oh boy - this is the second time this week I've had to deal with somebody storing booleans in floating-point data types <reads this quote and just nopes the hell away from this thread> ᛗᛁᛟᛚᚾᛁᚱPants Tell me all about it. 12:54, 25 May 2017 (UTC)[reply]
As to why there is a signed zero: the simple reason is that it worked better than the alternative. Originally there was going to be a floating-point mode setting: one mode would say that 1/0 was an infinity with no distinction between positive and negative infinity, and the other was the current standard, in which 1/+0 = +infinity and 1/-0 = -infinity. The first, 'projective' mode just wasn't useful enough. The reason one would want signed zeros and infinities is that floating-point numbers are approximations; an exact, absolute zero is just not so useful in that context - it is something more associated with integers. If we really wanted to do the job right we would use interval arithmetic or something along those lines instead of floating point numbers. Dmcq (talk) 14:20, 25 May 2017 (UTC)[reply]

Managing e-books on Electronics

I have been searching for "how to manage e-books on electronics" on this website and others for days now. I really need the answer so I will know how to do this.

ENID BLITON (talk) 23:44, 24 May 2017 (UTC) WEDNESDAY 24TH MAY,2017[reply]

What's the problem you are having? For example, if you have too many e-books in one folder to find what you want, then sub-folders, by topic or author, for example, might be the solution, assuming your device allows you to create such folders and move files between them. StuRat (talk) 01:00, 25 May 2017 (UTC)[reply]
You could try some sort of ebook management software. A good option is Calibre [1]. It supports a variety of file formats, and can transfer to and from ebook reader devices. --Fuaran buidhe (talk) 22:44, 27 May 2017 (UTC)[reply]