
Wikipedia:Reference desk/Archives/Mathematics/2007 August 29

From Wikipedia, the free encyclopedia
Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


August 29


Arithmetic Precision, Significant Figures - Values that round to 10


In the article on Arithmetic Precision an example is given of the value of the number 12.345 to different numbers of significant figures.

But it is not clear to me what the answer is, when the number of significant figures = 1, for the following numbers:

For 9.5, is the answer 1 or 10?

For 98.4, is the answer 1, 10, or 100?

I am using common rounding - i.e. round up when leftmost discarded digit is greater than or equal to 5.

Thanks!

KenlayZA 02:04, 29 August 2007 (UTC)[reply]

9.5 to one significant figure is 10, and 98.4 to one significant figure is 100. In the first case, 10 is the answer because it is just as close as 9 (and as you say, 5s round up by convention), and much closer than 20. In the second case, 100 is the answer because it is closer than 90 and much closer than 200. —Keenan Pepper 02:30, 29 August 2007 (UTC)[reply]
Thanks very much for this clarification.
KenlayZA 02:57, 29 August 2007 (UTC)[reply]
You picked some bad numbers in your examples; I am not sure you understand it fully. For example, 9.5 would round to 9 or 10; it can't be 1. 98.4 would be a choice of 90 or 100; it can't be 1 or 10. Significant digits means the number of digits that are not zero. It has nothing to do with only being able to pick 1, 10 or 100. --Dacium 06:14, 29 August 2007 (UTC)[reply]
"digits that are not zero" deserves clarification, as it's quite wrong as stated. Leading zeroes left of the decimal point are dropped, as normal, and are not significant (e.g. "00011" is "11"). Zeroes left of the decimal point when no decimal point exists are not significant (e.g. "100" has 1 significant digit). Similarly, zeroes right of the decimal point when no non-zero digits exist left of the point are not significant (e.g. "0.001", or ".001", has 1 significant digit). All other zeroes are significant: "100." has 3 significant digits, and "1.0010" has 5 significant digits. — Lomn 14:54, 29 August 2007 (UTC)[reply]
And for odd cases, such as "100 with two significant digits", we render it as 1.0 * 10². — Lomn 14:58, 29 August 2007 (UTC)[reply]
Similarly, the excruciatingly correct way to write "9.5 to one significant digit" is not "10" but "1*10¹" – or "1e1". (My dad, who teaches physics, tells me that his students increasingly use the latter notation, 'cause it's what they read off their calculators. I wonder where it was first used; Fortran?) Sometimes a zero is significant. —Tamfang 02:54, 2 September 2007 (UTC)[reply]
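The convention discussed in this thread (round half away from zero, keep a given number of significant figures) can be sketched in Python. The helper name below is mine, and the standard decimal module is used so that ties like 9.5 behave as in the discussion rather than as binary floats:

```python
from decimal import Decimal, ROUND_HALF_UP

def round_sig(x, n=1):
    """Round x to n significant figures, halves rounding away from zero."""
    if x == 0:
        return 0.0
    d = Decimal(str(x))
    shift = d.adjusted() - (n - 1)   # exponent of the last digit to keep
    scaled = d.scaleb(-shift).quantize(Decimal(1), rounding=ROUND_HALF_UP)
    return float(scaled.scaleb(shift))
```

With this, round_sig(9.5) gives 10.0 and round_sig(98.4) gives 100.0, matching Keenan Pepper's answers above.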

Google Calculator bug?


How come Google Calculator thinks distance^2 and distance^-2 are compatible units? See, for example, its response to "1 m^2 in m^-2". Is this a bug or am I going crazy? How do I report a bug in Google Calculator? —Keenan Pepper 03:14, 29 August 2007 (UTC)[reply]

I don't know, but when I typed in "4^-2" it gave back 0.0625, which is correct. Maybe you are using 1, which will always give you 1, e.g. 1 = 1^2 = 1^-2? Try a different number. Or maybe I don't understand the question. Micah J. Manary 04:08, 29 August 2007 (UTC)[reply]
You don't understand the question. Google Calculator has a built in unit conversion function, which is supposed to recognize when units are compatible and automatically convert between them if they are. It's very useful for physical problems. The problem is that m^2 and m^-2 are incompatible units, yet Google Calculator converts between them, meaninglessly. —Keenan Pepper 05:39, 29 August 2007 (UTC)[reply]
I've found the same thing, and it doesn't matter if you try and switch units (e.g. try 1 m^2 in feet^-2), but it doesn't work if (a) the exponent is 1 (e.g. 1 m in m^-1, or even 1 m^1 in m^-1), (b) the exponents don't match (e.g. 1 m^-1 in m^2), or (c) the dimensions are unrelated (e.g. 1 m/s in ft), which are all as it should be. Weird. Confusing Manifestation 04:20, 29 August 2007 (UTC)[reply]

I challenge you to convert 7 m^2 (aka seven square metres) to m^-2. See if you can do it manually. 202.168.50.40 04:45, 29 August 2007 (UTC)[reply]

That's like saying "I challenge you to convert 7 seconds to amperes", or even "I challenge you to convert 7 apples to notes on a pentatonic scale". It's not a case of "you can't do it manually", it's a case of "you can't do it at all because they're completely different things". Confusing Manifestation 05:20, 29 August 2007 (UTC)[reply]


Google appears to think that, since 1^2 = 1^(-2), it can carry this across to multiples of both sides? --Dacium 06:10, 29 August 2007 (UTC)[reply]

Continuing ConMan's observations, in fact it only happens when the exponent is 2 (not 3 or anything). It also gives 2 m^2 = 0.5 m^(-2). This is definitely a bug. What's worse, the good folk at Google seem unaware of the possibility that they might have a bug - I did not see in http://www.google.com/contact/ any information about how to send bug reports. -- Meni Rosenfeld (talk) 08:48, 29 August 2007 (UTC)[reply]
People need to realize that Google's perpetually in beta. Gzuckier 16:35, 29 August 2007 (UTC)[reply]
Isn't a bug reporting mechanism the most important part of a Beta? -- Meni Rosenfeld (talk) 10:56, 30 August 2007 (UTC)[reply]
Maybe this page will be helpful. --KSmrqT 11:21, 30 August 2007 (UTC)[reply]

It isn't all that wrong to claim that 2 m² = 0.5 m⁻². There's no deep inconsistency in treating reciprocal units as equivalent, except in the degenerate case of no units (all exponents zero). That said, this is pretty clearly unintended behavior. -- BenRG 12:03, 31 August 2007 (UTC)[reply]

What? How is it not inconsistent for 2 m² to be equal to 0.5 m⁻²? Even if m² and m⁻² are equal (say, if m is taken to mean 1), then 2 m² ≠ 0.5 m⁻², since 2 ≠ 0.5. -- Meni Rosenfeld (talk) 15:53, 31 August 2007 (UTC)[reply]
Well, 2 m² = 0.5 m⁻² if m = 1/√2. But that's not what I meant, of course. What I meant is that it's just as reasonable to ask for "2 meters in diopters" or "2 ohms in mhos" as to ask for "2 meters in feet". These are, in a practical sense, all interconvertible units. I wonder if this quirk of Google Calculator is a side effect of allowing some unit conversion of this sort, though I can't think what it could be. (It understands both ohms and mhos but won't convert between them.) -- BenRG 00:31, 1 September 2007 (UTC)[reply]
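To illustrate the correct behaviour next to the observed one, here is a hypothetical Python sketch. The dimension tuples and both function names are my own illustration, not Google's actual implementation:

```python
# A dimension is a tuple of base-unit exponents, here (metre, second).
def convertible(a, b):
    """The correct rule: quantities interconvert only if dimensions match."""
    return a == b

def convertible_reciprocal(a, b):
    """A guess at the buggy rule: also accept exact reciprocal dimensions."""
    return a == b or all(x == -y for x, y in zip(a, b))

M2, M_MINUS2 = (2, 0), (-2, 0)
print(convertible(M2, M_MINUS2))             # False: m^2 and m^-2 don't mix
print(convertible_reciprocal(M2, M_MINUS2))  # True: the conversion Google performs
```

Note that this guess does not reproduce all of Confusing Manifestation's observations (it would also accept m → m⁻¹, which Google refuses), so whatever Google actually does is stranger still.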

Distributing points on a sphere


I want to distribute points evenly on a sphere. How would I find their positions? I'm not sure how best to define "evenly". Here's what I imagine for the first few values of n:

n   Placement of points
1   Anywhere.
2   Opposite each other.
3   Evenly along a great circle.
4   At the vertices of an inscribed tetrahedron.
(added later)
5   Equilateral triangle on the equator, one point at each pole.
6   Regular octahedron.
7   Regular pentagon on the equator, one point at each pole.
8   Two squares in parallel planes, rotated 45° relative to each other. Not a cube.

How can this be extended to any n in a systematic fashion? —Bromskloss 14:28, 29 August 2007 (UTC)

Maybe "the maximum distance between the points" is a better definition of what you want. wildie · wilđ di¢e · wilł die 14:44, 29 August 2007 (UTC)[reply]

I don't believe every N is possible: 5 isn't, 6 is the octahedron, 7 isn't (as far as I know), 8 is the cube, 9, 10, 11 aren't (I think), for 12 see icosahedron, for 20 see dodecahedron. For everything else you need to distinguish between 'perfectly evenly' (not possible) and 'practically evenly' (possible). In short, see platonic solids and consider their vertices. 87.102.18.14 15:21, 29 August 2007 (UTC)[reply]

One possible definition of "evenly distributed" is a configuration of the points that maximizes the minimum distance between any two points. Another way of expressing this is that you want to know, given n, the largest radius r such that n spherical caps of radius r can be packed on a sphere of unit radius; the configuration of the centres of such a packing is the answer. We have articles on Sphere packing and Packing problem, neither of which considers this problem. For what is known on the topic, see http://www.math.niu.edu/~rusin/known-math/95/sphere.faq. Although it is fairly old, I don't think there have been any breakthroughs since.  --Lambiam 15:40, 29 August 2007 (UTC)[reply]
The best technique around that I know of uses spirals, and works very well. Check this page: http://sitemason.vanderbilt.edu/page/hmbADS - Kieff | Talk 15:46, 29 August 2007 (UTC)[reply]
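A common concrete version of the spiral idea (a sketch, not necessarily the exact construction on the linked page) places points at evenly spaced heights and rotates each successive point by the golden angle:

```python
import math

def spiral_points(n):
    """n roughly even points on the unit sphere via a golden-angle spiral."""
    golden = math.pi * (3 - math.sqrt(5))   # golden angle, ~2.39996 rad
    pts = []
    for i in range(n):
        z = 1 - 2 * (i + 0.5) / n           # evenly spaced heights in (-1, 1)
        r = math.sqrt(1 - z * z)            # radius of the circle at height z
        theta = golden * i
        pts.append((r * math.cos(theta), r * math.sin(theta), z))
    return pts
```

Every point lies exactly on the sphere, and nearest-neighbour distances stay fairly uniform even for large n, which is why the spiral approach is popular for numerical work.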
A friend of mine worked on minimizing the potential energy of n electrons on S^24, which apparently has connections to the Leech lattice, but it was for a government agency and thus is classified. =( Also, Lambiam, your link includes the following: A couple of theorems are relevant here, I guess: first, given any group G there is a group of rotations in SO(n) isomorphic to G as long as n is sufficiently large. Does anyone have a reference for this surprising result? (Or, if it's false, what is the appropriate restriction? Finitely generated group? Lie group? Finite group?) Tesseran 16:56, 29 August 2007 (UTC)[reply]
See Min-Energy Configurations of Electrons On A Sphere. I added some to your table. —Keenan Pepper 17:51, 29 August 2007 (UTC)[reply]

Thank you for your answers (please keep 'em coming if you want)! There seems to be no obvious solution to this problem, but several satisfactory ones. I have glanced at the links and I see there's a lot of reading available. For numerical computations on a sphere (which is what I had in mind), perhaps triangles are what I should ask for. Am I right? Oh, and electrons on S^24 sounds very weird! —Bromskloss 18:03, 29 August 2007 (UTC)[reply]

Oh, if you want triangles I recommend starting with an icosahedron (20 faces); then you can subdivide each triangle into 4 smaller triangles (half the edge length). Don't forget to re-adjust the new vertices so that they are an equal distance from the centre. This doesn't give a perfectly equal set, but the triangles are fairly similar in size (within 5%, I think better than that). You can repeat the process on the new triangles ad nauseam, getting smaller and smaller triangles: 20×4^n of them (n a positive integer). It's a very simple method and easy to do on a computer, if that was the sort of thing you were thinking of. 87.102.18.14 18:28, 29 August 2007 (UTC)[reply]
(If you don't like the icosahedron, there are other starting options, such as the truncated icosahedron: first convert the hex and pent faces into triangles, then repeat the process above.) 87.102.18.14 18:30, 29 August 2007 (UTC)[reply]
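The subdivision recipe just described can be sketched as follows; for brevity this starts from an octahedron rather than an icosahedron, but the process is identical (the helper names are mine):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def midpoint(a, b):
    # midpoint of an edge, pushed back out to the unit sphere
    return normalize(tuple((x + y) / 2 for x, y in zip(a, b)))

def subdivide(tris):
    """Split each spherical triangle into 4, re-projecting the new vertices."""
    out = []
    for a, b, c in tris:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

# the 8 faces of an octahedron as the starting mesh
X = [(1, 0, 0), (-1, 0, 0)]
Y = [(0, 1, 0), (0, -1, 0)]
Z = [(0, 0, 1), (0, 0, -1)]
mesh = [(x, y, z) for x in X for y in Y for z in Z]
mesh = subdivide(subdivide(mesh))   # 8 * 4^2 = 128 triangles
```

Each round quadruples the triangle count while keeping every vertex exactly on the sphere, which is the property that matters for numerical work.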

I think maybe we should ask 'what sort of numerical calculations on a sphere?' to be able to help more.87.102.18.14 19:05, 29 August 2007 (UTC)[reply]

You could also consider Geodesic domes. The article's weak on the maths, but you have a large number of possible configurations. --Salix alba (talk) 20:48, 29 August 2007 (UTC)[reply]
Ah, I was actually trying to find that article, but couldn't remember what they were called. So, what are the restrictions when using these? —Bromskloss 20:58, 29 August 2007 (UTC)[reply]
Euler's theorem V − E + F = 2 is a limiting factor: it says that not all vertices of the dome can have 6 adjacent edges; some must have only 5. I think 87.102.18.14 is close to their construction: start with an Archimedean solid and subdivide the faces into triangles. Magnus Wenninger's spherical models discuss 2-, 3- and 4-frequency icosahedra, where each edge of the triangle is divided into 2, 3 and 4 segments; these points are then connected to form a mesh of smaller triangles. For an n-frequency icosahedron you would have 20n² faces.
Wenninger also applies the same process to the dodecahedron and Snub dodecahedron. --Salix alba (talk) 22:23, 29 August 2007 (UTC)[reply]
Anton Sherwood has collected many helpful sources. --KSmrqT 04:32, 30 August 2007 (UTC)[reply]
Thanks for saving me the trouble of tooting my own horn ;) —Tamfang 02:40, 2 September 2007 (UTC)[reply]
As a side note, the distribution of points you described for N = 8 forms the vertices of an inscribed square antiprism. There's a note in the article about this very topic. Maelin (Talk | Contribs) 06:01, 30 August 2007 (UTC)[reply]
Dweller's thread of the week. It's an 'out of the box' idea.

Congratulations to all contributing here. You have won the second ever User:Dweller/Dweller's Ref Desk thread of the week award. Good job. I even understood (most of) this thread, which, for a mathematical numbskull, is quite an achievement, given the apparent complexities of this problem. Heck, I even found it interesting. And that, believe me, is quite a shock. Well deserved. --Dweller 11:02, 31 August 2007 (UTC)[reply]

I believe it's been proven that there is no solution after N=20 for exactly equidistant points. When I needed to solve this problem, I did it iteratively (i.e. 1. create a bunch of random unit-length vectors. 2. for each vector, find the three closest and "nudge" the vector away from them slightly, then renormalize it to unit length. 3. repeat 2 gradually decreasing the nudge distance.), and that seemed to work well. (There's a neat applet by Ken Perlin that appears to do something like this.) - Rainwarrior 16:34, 31 August 2007 (UTC)[reply]
Any of the Platonic or Archimedean solids has points spaced at exactly equal intervals; the number of vertices can therefore be 4, 6, 8, 12, 20, 30, 24, 48, 60, 120. And then there are the prisms and antiprisms, with 2n vertices. —Tamfang 06:35, 2 September 2007 (UTC)[reply]
I don't think Archimedean solids count. They have a regular configuration (i.e. any point is connected to its nearest points in the same way as any other) but the configuration is lopsided. For a Platonic solid not only are the connected points all the same distance from the vertex, but any two neighbouring edges also have the same angle; this is not true for the Archimedean solids. (The prisms/antiprisms exaggerate this, and it should be fairly obvious that they would not be an optimal solution for maximizing distance between points... excepting the prism cube and antiprism octahedron.) - Rainwarrior 07:31, 2 September 2007 (UTC)[reply]
More precisely the 2-, 3- and 4-antiprisms. None of the prisms is optimal for packing. (The 2-antiprism is the tetrahedron.) —Tamfang 09:00, 2 September 2007 (UTC)[reply]

Perhaps I'll start a WP article on this subject. What should the title be? —Tamfang 06:35, 2 September 2007 (UTC)[reply]

"Spherical point distribution"? I don't know, it's hard to think of a name for this that's not ambiguous. - Rainwarrior 07:31, 2 September 2007 (UTC)[reply]
"Spherical code", perhaps, if that's not too narrow. —Tamfang 09:16, 3 September 2007 (UTC)[reply]
Where does the word "code" come from and what does it imply? - Rainwarrior 16:38, 4 September 2007 (UTC)[reply]
According to Mathworld, spherical code is a synonym for a packing of discs on a sphere. (I misremembered that it has a more arcane meaning.) A spherical design is a set of points useful in numerical integration; I gather from Sloane that such sets have some qualities in common with the arrangements discussed here. —Tamfang 20:42, 5 September 2007 (UTC)[reply]
"Spherical arrangement"? —Tamfang 20:42, 5 September 2007 (UTC)[reply]

Determinant sign


I want to find whether the determinant

det( I + exp(A)) is positive or negative.

(A is an N×N matrix; N can be as large as 18.) Is there a way to find its sign without actually calculating the determinant? The determinant is extremely large and apparently Matlab fails to calculate it correctly: for example, adding a negligibly small random "error" to A changes the determinant completely. Examples:

>> stabval = det(eye(size(intA)) + expm(intA + 1e-12*rand(size(intA))))

stabval = 4.8858e+064

>> stabval = det(eye(size(intA)) + expm(intA + 1e-12*rand(size(intA))))

stabval = -2.6021e+063

>> stabval = det(eye(size(intA)) + expm(intA + 1e-12*rand(size(intA))))

stabval = -1.7182e+064


The elements of intA are:

intA =

 1.0e+003 *
   0        0.0021   -0.0015   0    0    -0.0000
   0.0002   0        -0.0000   0    0    -0.0000
  -3.3873   -0.0004   0        0    0     0
  -0.0001   0.0005    0        0    0    -0.0001
   0.0005   0.0001    0        0    0     0.0005
   0        0         0.0010   0    0     0


Why does the value of the determinant vary so wildly for a negligible change in the matrix? An answer I got is that "det" is fundamentally a very, very numerically sensitive thing to compute. [1] Any ideas for finding whether the determinant is positive or negative? Regards, deeptrivia (talk) 18:04, 29 August 2007 (UTC)[reply]

I'm afraid the matrix you're attempting to compute the determinant of is extremely ill-conditioned: all row vectors point in the same direction.  --Lambiam 21:20, 29 August 2007 (UTC)[reply]
Here are some interesting observations. Matlab computations yield:
  • rank(intA) = 6
  • rank(expm(intA/X)) = 6 for all X > 4.975109
  • rank(expm(intA/X)) = 5 for 2.236124 < X < 4.975109
  • rank(expm(intA/X)) = 1 for all X < 2.236124

Is this happening due to numerical errors like roundoff (since the numbers are very large), or can it really happen? Should the rank matter for computation of a determinant? Is there a way to solve this problem? Can some kind of pre-conditioning help? deeptrivia (talk) 03:32, 30 August 2007 (UTC)[reply]

If the rank is smaller than the size, the determinant is 0. The sign of the numerically computed determinant, using floating-point arithmetic, is then random noise. The entries in expm(intA) are so large that adding I does not make a practical difference. Note that the proper powers of intA have rank 3; in the power series expansion of the matrix exponential, the terms X^k/k! for k in the range 50 to 100 dominate the scene. For k ≥ 13, the odd powers have higher max entries than the neighbouring even terms.  --Lambiam 08:18, 30 August 2007 (UTC)[reply]
I found a paper about additive preconditioning for finding determinants of ill-conditioned matrices. They talk about generating two random matrices U and V of size N×r (r < N) and then using U*transpose(V) as an additive preconditioner. The condition is that U*transpose(V) should be well-conditioned. To my surprise, for any random U and V, U*transpose(V) is always ill-conditioned. Any reason why? deeptrivia (talk) 22:24, 30 August 2007 (UTC)[reply]
No idea; it shouldn't be, so perhaps you're doing something wrong. See below.
Here is another idea I had, which however seems not to work as I had hoped. Use multiplicativity, det(AB) = det(A)·det(B), and exp(A)⁻¹ = exp(−A) to rewrite
det(I + exp(A)) = det(exp(A))·det(I + exp(-A)) = exp(tr(A))·det(I + exp(-A)).
My hope was that I + exp(-A) would be better conditioned, but this appears at first sight not to be the case. I understand your intA is just an example, but is its structure typical of the general case? Perhaps (just perhaps) if I knew the background where this matrix and the expressions come from, and why you want to know the sign of this determinant, I'd have a better idea.  --Lambiam 22:47, 30 August 2007 (UTC)[reply]
I missed something. If U and V have rank < N, then both U and V (and therefore Vᵀ) are singular, and the product of two singular matrices (or in fact one singular matrix with any other matrix) is again singular.  --Lambiam 22:56, 30 August 2007 (UTC)[reply]
The example you give for intA has a single entry which is more than a 1000 times larger than any of the other entries in the matrix. Is this true in the general case? If so, you might want to call the bad entry x. Then your problem is to calculate the exponentinal of a matrix with polynomial entries. There may be some techniques for this. Stefán 23:14, 30 August 2007 (UTC)[reply]


This is related to the uniqueness of two-point BVP solutions problem that I had asked before. It turns out that for my system, if det(I + expm(A)) = 0, then I have infinitely many solutions for the BVP. The functions of 's' are only available in numerical form (as arrays of values vs. 's'). This 6×6 problem is a simple case of the bigger 18×18 problem that I want to eventually solve.

I also noticed that the problem might actually be in the calculation of the exponential. For example, I get:

K>> expm(intA)*expm(-intA)

ans =

 1.0e+047 *
  -0.1310    0.0029   -0.0027         0         0   -0.0000
  -0.0003    0.0000   -0.0000         0         0   -0.0000
   6.3674   -0.1428    0.1300         0         0    0.0000
   0.0000   -0.0000    0.0000    0.0000         0    0.0000
  -0.0003    0.0000   -0.0000         0    0.0000   -0.0000
   0.0905   -0.0020    0.0018         0         0    0.0000

Regards, deeptrivia (talk) 23:59, 30 August 2007 (UTC)[reply]

The problem might have to do with precision in Matlab, which carries only about sixteen decimal digits in addition. E.g., Matlab gives:

>> (1e15+1) - 1e15

ans =

    1

BUT

>> (1e16+1) - 1e16

ans =

    0

The calculation of the exponential and determinant might be suffering from such issues because of the big range of numbers. Is there a workaround, or a data type with more bytes per number that can deal with a higher range of exponents? I know one solution is to convert everything to the symbolic data type to get arbitrary precision, but symbolic calculation is very, very slow. deeptrivia (talk) 01:13, 31 August 2007 (UTC)[reply]
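For what it's worth, the same sixteen-digit cliff appears in any IEEE double, and an arbitrary-precision decimal type sidesteps it. Here is an illustration in Python's standard decimal module (Matlab's analogue would be its variable-precision or symbolic types, as mentioned above):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50           # 50 significant digits instead of a double's ~16

big = Decimal(10) ** 16
print((1e16 + 1) - 1e16)         # 0.0: the 1 is lost in binary doubles
print((big + 1) - big)           # 1: it survives at 50-digit precision
```

The trade-off is exactly the one noted in the thread: software arithmetic at higher precision is far slower than hardware doubles.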


"I missed something. If U and V have rank < N, then both U and V (and therefore Vᵀ) are singular, and the product of two singular matrices (or in fact one singular matrix with any other matrix) is again singular.  --Lambiam 22:56, 30 August 2007 (UTC)"

I agree with you. Page 5 of this paper appears to be saying that the rank of UVᵀ is r (not N), but it is still well-conditioned. What does that mean? deeptrivia (talk) 02:47, 31 August 2007 (UTC)[reply]
I think the paper says that A + UVᵀ is well-conditioned (or, at least, better conditioned than A itself), and not that UVᵀ is well-conditioned. -- Jitse Niesen (talk) 07:25, 31 August 2007 (UTC)[reply]

The exponential of a matrix is always nonsingular. That follows from the formula in that article. So, when Matlab says that "rank(expm(intA/X)) = 5 for 2.236124 < X < 4.975109", Matlab is wrong; the rank should be 6. It follows that you cannot trust Matlab when X is smaller than 4.97...; the problem is the limited precision. In fact, as you said, Matlab cannot even accurately compute expm(intA), which is not that surprising, because your matrix has an entry on the order of a thousand, and e^1000 is a huge number.

This is irrelevant here, but a further problem is calculating the determinant. As you said, this is ill-conditioned, which suggests that you shouldn't try to compute it at all. If you want to find out whether a matrix is singular or not, compute its rank in Matlab (which uses the singular value decomposition). However, in this case that won't work, because you can't even compute expm(X).

Sorry, I can't really help you; I can only say that it's not an easy problem. -- Jitse Niesen (talk) 07:16, 31 August 2007 (UTC)[reply]

I seem to have run out of ideas – other than doing the computations in multi-length arithmetic, with increasingly large numbers of digits, until the results stabilize.  --Lambiam 13:35, 31 August 2007 (UTC)[reply]
Found something that works: expm(intA/128)^128 works just fine. The factor 128 is chosen to make the norm of the matrix < 1. But even this stops working after a point (when the power factor required becomes too large). deeptrivia (talk) 21:14, 1 September 2007 (UTC)[reply]
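One further route, offered only as a sketch: since det(I + e^A) = ∏(1 + e^λ) over the eigenvalues λ of A, the sign can be accumulated factor by factor without ever forming the enormous determinant. The NumPy function below is my own illustration (the function name and overflow cutoff are assumptions), not something tested on the matrices in this thread:

```python
import numpy as np

def sign_det_I_plus_expm(A):
    """Sign of det(I + expm(A)) via det(I + e^A) = prod(1 + e^lambda)
    over the eigenvalues lambda of A; only angles are accumulated,
    so nothing astronomically large is ever multiplied out."""
    total_angle = 0.0
    for lam in np.linalg.eigvals(A):
        lam = complex(lam)
        if lam.real > 500:                 # e^lam would overflow: 1 + e^lam ~ e^lam
            total_angle += lam.imag
        else:
            total_angle += np.angle(1 + np.exp(lam))
    # for a real matrix the product is real, so the total angle is a
    # multiple of pi (up to roundoff); its cosine carries the sign
    return 1 if np.cos(total_angle) > 0 else -1
```

Incidentally, for a real matrix every real eigenvalue contributes 1 + e^λ > 0 and every conjugate pair contributes |1 + e^λ|² ≥ 0, which suggests det(I + expm(A)) is never genuinely negative; the large negative values reported above would then be pure floating-point noise, consistent with Lambiam's diagnosis.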

name of solid


What is (if any) the name of this solid? It is like a truncated icosahedron in that it has 12 pentagonal faces, but in this case the nearest edges of the pentagons are parallel. Non-regular hexagons (30 of them) bridge the pentagonal edges; it has 42 faces.

An alternative construction would be to expand a dodecahedron (explode it so that each face travels perpendicular to its plane), then insert hexagons into the gaps (3 hexagons meet at each of the 20 vertices of the dodecahedron).

Therefore each hexagon has two (opposite) sides joining regular pentagons, and the remaining four touch other hexagons. Please help... 87.102.18.14 18:39, 29 August 2007 (UTC)[reply]

You could call it an alternated rhombic triacontahedron, though the WP article on it is instead called truncated rhombic triacontahedron. —David Eppstein 18:54, 29 August 2007 (UTC)[reply]
Excellent, very quick, thank you.87.102.18.14 19:01, 29 August 2007 (UTC)[reply]
P.S. a correction: it turns out that alternation is a different kind of truncation in which the faces introduced in the truncation process are adjacent to each other, not true in this case. But truncated rhombic triacontahedron is still ok. In the fullerene literature it's known as C80(Ih). See the TRT article and its talk page for details. —David Eppstein 00:56, 30 August 2007 (UTC)[reply]
ok, I admit I didn't check the truncation; it looked possible and I left it at that. The absence of pentagons in the rhombic triacontahedron made me doubt, but..
One look at the second gave me my answer, and I learnt that there is a term for what I now know is called 'alternation'. I still owe you my thanks, thanks. 87.102.14.233 08:38, 30 August 2007 (UTC)[reply]