Wikipedia:Reference desk/Archives/Mathematics/2013 January 19

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


January 19

Riemann hypothesis for curves over finite fields

Okay so I've been reading this document and I understand a lot of it but not all;

http://www.math.ucdavis.edu/~osserman/math/riemann-elliptic.pdf

Firstly, is $a_q$ the same as the value of $a$ in the Weierstrass equation, or is it completely unrelated to it? Secondly, why does it equal $q - N_q$? And also, where does the $2\sqrt{q}$ come from? I assume it's something to do with $q$ being the expected value, $a_q$ being the variance, and $N_q$?

I think there is no connection between $a_q$ and $a$ in that paper (except, of course, that if one knows the Weierstrass equation, then one can in principle compute $a_q$). The $2\sqrt{q}$ I think is not motivated by any kind of probabilistic considerations, but rather because it is this estimate which is equivalent to the local Riemann hypothesis. Sławomir Biały (talk) 02:48, 19 January 2013 (UTC)
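Spelling the equivalence out (my own addition, assuming the characteristic polynomial has the shape $f(u) = u^2 - a_q u + q$, as with the $f(u)$ discussed below): its roots $u_1, u_2$ satisfy $u_1 u_2 = q$ and $u_1 + u_2 = a_q$. If $a_q^2 \le 4q$, the roots are complex conjugates (or a repeated real root $\pm\sqrt{q}$), so $|u_1|^2 = |u_2|^2 = u_1 u_2 = q$, i.e. $|u_i| = q^{1/2}$, which is the "real part 1/2" statement; if $a_q^2 > 4q$, the roots are distinct reals with product $q$ and cannot both have absolute value $\sqrt{q}$. So
$$|a_q| \le 2\sqrt{q} \iff |u_1| = |u_2| = \sqrt{q},$$
which is why the $2\sqrt{q}$ estimate and the local Riemann hypothesis are the same statement.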
The idea is that you can count the number of points your curve has over the finite field by looking at the trace of the Frobenius endomorphism acting on a relevant vector space (how complicated a vector space depends on the curve... for an elliptic curve, you can get away with looking at the Tate module), by a version of the Lefschetz fixed point theorem in this setting.
Forgetting for a second how you set it all up, you want to compute $\operatorname{tr}(F \mid V)$, the trace of the Frobenius endomorphism $F$ acting on some vector space $V$ over a certain field of characteristic 0. But you might know that the trace of a linear map is just the sum of its eigenvalues.
Here's then how everything fits in: the polynomial that appears in the formula is the characteristic polynomial of $F$ acting on $V$, $\det(1 - FT \mid V)$, and thus its roots are (up to taking inverses because of the different convention) the eigenvalues of $F$.
Now, the Riemann hypothesis for curves over finite fields says precisely that the roots of this polynomial are of the form $q^{-s}$ for $s$ a complex number of real part 1/2.
This then means that the trace of $F$ is the sum of $2g$ numbers (where $g$ is the genus of your curve, which is 1 in the case of an elliptic curve), each of absolute value $\sqrt{q}$. In particular, its own absolute value is less than or equal to $2g\sqrt{q}$.
The final formula for the number of points of your curve over $\mathbb{F}_q$ is then, according to the Lefschetz fixed point formula, $q + 1 - \operatorname{tr}(F \mid V)$, and so this is within $2g\sqrt{q}$ of $q + 1$. (The difference of +1 with the quoted article is the distinction between affine and projective, i.e. here I am counting an extra point at infinity which is the identity for the group law on the elliptic curve in the genus 1 case with the Weierstrass equations.)
You might also want to take a look at my answer to your previous question for clarifications.
Hope that helps. -SamTalk 08:53, 19 January 2013 (UTC)
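To see the estimate in action, here is a brute-force check in Python (an illustrative sketch of my own; the prime and the curve are arbitrary choices, not taken from the notes):

```python
import math

def count_points(p, a, b):
    """Number of projective points of y^2 = x^3 + a*x + b over F_p (p an odd prime)."""
    # squares[r] = how many y in {0, ..., p-1} satisfy y^2 = r (mod p)
    squares = {}
    for y in range(p):
        r = y * y % p
        squares[r] = squares.get(r, 0) + 1
    n = 1  # the point at infinity, the identity of the group law
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        n += squares.get(rhs, 0)
    return n

p, a, b = 101, 2, 3          # a nonsingular example curve over F_101
N = count_points(p, a, b)
a_q = p + 1 - N              # the trace of Frobenius
print(N, a_q, 2 * math.sqrt(p))
assert abs(a_q) <= 2 * math.sqrt(p)   # the "2*sqrt(q)" (Hasse) bound
```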

I see. I think. So in the paper, the $f(u)$ function with the roots $u_1$ and $u_2$ is derived from the Frobenius endomorphism?

Yes, that's it. It's the characteristic polynomial of the Frobenius endomorphism, except that the article doesn't tell you the full story, as it doesn't tell you what vector space it's acting on or anything; it just gives you the characteristic polynomial out of thin air. In fact, in the article, $f(u)$ is the characteristic polynomial of the Frobenius (acting on some mysterious two-dimensional vector space, or $2g$-dimensional in the general genus $g$ case), whereas the numerator of the zeta function, which I'll keep calling $P(T)$, is the characteristic polynomial written as $\det(1 - FT \mid V)$, so that's just a slightly different convention. In any case, this means $a_q$ is the trace of Frobenius, and $q$ is the determinant of Frobenius.
If you care about what this vector space is, which this paper is hiding under the blanket, I'll just say that you turn out to have more than one choice. The most convenient would be to have some nice explicit vector space over the rational numbers $\mathbb{Q}$, but it turns out it isn't exactly so simple... in fact, what you get is a compatible system of vector spaces $V_\ell$, one for each prime number $\ell$. For $\ell \neq p$ (where $p$ is the characteristic of your original field $\mathbb{F}_q$), this is the Tate module of your elliptic curve, $V_\ell(E) = T_\ell(E) \otimes_{\mathbb{Z}_\ell} \mathbb{Q}_\ell$, where $T_\ell(E)$ is the inverse limit of the $\ell^n$-torsion subgroups of your elliptic curve $E$ over the algebraic closure $\overline{\mathbb{F}_q}$. In the general genus $g$ case, you can take the Tate module of the Jacobian variety of your curve, but the way that naturally generalises to other varieties besides curves (surfaces, etc.) is to take the $\ell$-adic étale cohomology of your variety. For $\ell = p$, things are even more complicated, as showcased by the complexities of $p$-adic Hodge theory, and you have to replace étale cohomology with crystalline cohomology.
I know that was a bit technical, but the thing to remember here is the case of elliptic curves, where basically you build up a vector space from the torsion subgroups of your elliptic curve: your elliptic curve has a group structure, and you look at points $x$ on the curve such that $\ell^n \cdot x = 0$ (these are the $\ell^n$-torsion points). This is the subgroup $E[\ell^n]$, and if you know about elliptic curves over the complex numbers just being complex tori you'll know that (over the complex numbers!) it is isomorphic to $(\mathbb{Z}/\ell^n\mathbb{Z})^2$. This is true if you look at all points over the complex numbers $\mathbb{C}$, but it is in fact also true over the algebraic closure $\overline{\mathbb{F}_q}$, as long as $\ell \neq p$. If $\ell = p$, it's more complicated to do the right thing.
The picture to have in your head is that you build an elliptic curve like a topological torus, by gluing opposite edges of a rectangle... except that to take into account the complex structure you have to do this while preserving angles, so it matters whether you use a rectangle or instead a tilted parallelogram... and you get this picture (sorry, I just made it in Inkscape): you're identifying opposite sides by a translation of the complex plane, and the group law is addition of points in the complex plane except you translate back within this fundamental parallelogram, imagining translated copies of this parallelogram tiling the plane. So the points depicted are exactly the points which, when you add them to themselves 4 times, give the origin (any vertex of the parallelogram; they're all identified after gluing). You can see you have 16 points (the ones on each edge are identified with the ones on the opposite edge), and in fact a copy of the abelian group $(\mathbb{Z}/4\mathbb{Z})^2$: you are essentially labelling points by pairs $(a, b)$ and counting modulo 4 in each factor; and the label $(a, b)$ corresponds to the point with complex coordinates $\tfrac{a}{4} + \tfrac{b}{4}\tau$, where $\tau$ is the complex coordinate of the top-left vertex of the parallelogram.
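A tiny computational restatement of that labelling (a sketch of my own; it works with the coefficients of $1$ and $\tau$ directly, so nothing depends on the particular parallelogram):

```python
# Search the fundamental parallelogram {s + t*tau : 0 <= s, t < 1} for the points z
# with 4z in the lattice Z + Z*tau.  Writing z = s + t*tau, "4z is a lattice point"
# just says 4s and 4t are integers, so we expect exactly the 16 points with
# s, t in {0, 1/4, 1/2, 3/4} -- a copy of (Z/4Z) x (Z/4Z).
from fractions import Fraction

N = 20  # grid resolution for the search; any multiple of 4 works
four_torsion = []
for i in range(N):
    for j in range(N):
        s, t = Fraction(i, N), Fraction(j, N)
        if (4 * s).denominator == 1 and (4 * t).denominator == 1:
            four_torsion.append((s, t))  # corresponds to z = s + t*tau

print(len(four_torsion))  # 16
print(sorted((int(4 * s), int(4 * t)) for s, t in four_torsion))  # the labels (a, b)
```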
So anyway, you put together all these torsion subgroups and bundle them up, by doing a kind of decimal expansion (in fact an $\ell$-adic expansion!), and you get the Tate module $T_\ell(E) = \varprojlim_n E[\ell^n]$, in the same way as you can build the $\ell$-adic integers by doing $\mathbb{Z}_\ell = \varprojlim_n \mathbb{Z}/\ell^n\mathbb{Z}$ (you are taking an inverse limit). Then all of these subgroups have an action of the Frobenius endomorphism on them, because remember these were only defined over the algebraic closure $\overline{\mathbb{F}_q}$, and the fundamental structure of finite fields (and their Galois groups) means that this is permuted around by Frobenius (which on coordinates is just the map $x \mapsto x^q$), and the stuff that isn't permuted around is precisely the finite field you started with (essentially because of Fermat's little theorem). So in the end, you obtain $T_\ell(E)$ with an action of the Frobenius endomorphism $F$, and by allowing fractions you get the required vector space, which is $V_\ell(E) = T_\ell(E) \otimes_{\mathbb{Z}_\ell} \mathbb{Q}_\ell$. I've played fast and loose by not distinguishing between the picture over the complex numbers (where you have this lattice, and tiling of the complex plane) and the picture over the finite field, but everything goes through as long as $\ell \neq p$.
That takes you through the construction of these vector spaces. It might take a while to digest, but if you think about it enough you see it kind of is the only choice you have! You bundle up these lattice points like that and look at how Frobenius acts on them.
Sorry if that was a bit too much! -SamTalk 08:36, 20 January 2013 (UTC)

prime ideal extension and contraction

Let f : A → B be a ring homomorphism. Suppose I have a prime ideal p of A, and I extend it to get the ideal p^e of B. If p^e is also prime, does it follow that p^ec = p? --helohe (talk) 04:00, 19 January 2013 (UTC)

I don't think so, because the identity p^ec = p implies that p comes from a prime ideal of B, which is not necessarily the case. More precisely,[1]
a prime ideal p of A is in the image of Spec(B) → Spec(A) if and only if p^ec = p.
(note to myself, cite this somewhere in Wikipedia.)
-- Taku (talk) 17:25, 19 January 2013 (UTC)
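To make this concrete, here is one example of my own (not from the thread, but consistent with the criterion above): take the quotient map f : Z → Z/5Z and p = (0) ⊂ Z, which is prime. Its extension p^e = (0) ⊂ Z/5Z is prime (it is the zero ideal of a field), yet

p^ec = ker f = 5Z ≠ (0) = p,

matching the fact that (0) is not the contraction of any prime ideal of Z/5Z. So p^e being prime does not force p^ec = p.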


Thank you. There is something else in Atiyah–MacDonald that just confused me. The words "onto" and "into" seem not to be used consistently. For example, many times "onto" means surjective, but in the definition of a ring homomorphism, "into" is used even though the map is not meant to be injective. --helohe (talk) 22:13, 20 January 2013 (UTC)
I don't think "into" necessarily implies injective; on the other hand, "onto" usually implies surjective. Usually, the context makes it clear what is meant. -- Taku (talk) 13:51, 21 January 2013 (UTC)

Running time of APR-CL

In a book, I found the APR-CL "has running time n raised to the power log log n." Does it mean n^(log(log n))?

Later, this book says log log 10^100 is about 230, but actually log 10^100 is around 230, not log log 10^100.

Do you think the writer or editor is confused?


I am not a mathematical expert but a translator. Please help in plain English. --Analphil (talk) 08:15, 19 January 2013 (UTC)

Yes. Yes. Double sharp (talk) 10:02, 19 January 2013 (UTC)
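A quick numerical check (my own; natural logarithms are assumed, since that is what makes the book's ≈230 figure come out):

```python
import math

n = 10**100
print(math.log(n))             # ~230.26: this is log(10^100), the number quoted in the book
print(math.log(math.log(n)))   # ~5.44: the actual value of log log 10^100
# So for n around 10^100, a running time of n^(log log n) means an exponent of about 5.4,
# not about 230.
```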

Estimating a categorical distribution

Consider the set of K-dimensional vectors whose entries are all elements of the set . Show that , where is the number of times appears in . --AnalysisAlgebra (talk) 19:20, 19 January 2013 (UTC)

just to check, this together with your follow-on question below isn't your homework, is it? ---- nonsense ferret 12:52, 20 January 2013 (UTC)
No, they are not. --AnalysisAlgebra (talk) 13:58, 20 January 2013 (UTC)
Note, please, that left and right angle brackets are distinct from the less-than and greater-than operators. They are denoted with the LaTeX commands \langle and \rangle, respectively, and they render as $\langle x \rangle$ rather than $<x>$. --CiaPan (talk) 10:59, 21 January 2013 (UTC)
Although personally I think the acute angle looks better. The problem with the less-than and greater-than signs is not so much the glyph itself as the spacing. Possibly there's a happy medium somewhere in between, but I really don't like $\langle$ and $\rangle$ much. --Trovatore (talk) 11:04, 21 January 2013 (UTC)

Geometric figure Extension.

(I'm not really sure what to call this, if anyone has a better title, feel free to alter).

Let S be a closed subset of the plane. Define S' as follows: S' is the set of all points c such that there are a, b in S with distance ab = distance bc and a, b, c on a straight line (equivalently, c = 2b − a, the reflection of a through b). I'm looking at this generally for filled-in polygons. If S is an *even*-sided regular polygon then S' is exactly 9 times the area of S. In fact, with a four-sided polygon the ratio is 9 as far as I can tell, even in the cases of rectangles or parallelograms (it isn't affected by stretching or skewing). It is also a ratio of 9 for a circle. However, for S one of the *odd* (2n+1)-sided regular polygons with side d, S' turns out to be the polygon with twice the number of sides (4n+2), alternating between sides d and 2d (with all angles the same). If S is a regular (equilateral) triangle then S' has an area 13 times that of S (this can be diagrammed on a triangular grid). As the number of odd sides gets bigger the polygon gets closer to the circle, and the ratio gets closer to 9.

Note, if T is just the border of S then T' = S' without S (so if S is a square of side length d, then T' is a 3d-by-3d square missing the center d-by-d square).

So I'm looking for three things.

  • The first is an equation for the ratio of the area of the polygon of 4n+2 sides (alternating between sides of length d and 2d) to that of the regular polygon of 2n+1 sides of length d (this would have value 13 for n=1 and go to 9 as n increases).
  • The second is, more generally, whether things get odder for shapes that aren't polygons.
  • The third is which branch of mathematics this falls under: some branch of geometry, topology, or something else?

Naraht (talk) 21:18, 19 January 2013 (UTC)

Yes, things get more complicated for less regular figures.
  • Take an L-figure, a concave shape obtained by removing a 1/4 part of a 1-by-1 square from its corner, say the top-right corner. The result of the extension you described would be a 3-by-3 square with two smaller squares subtracted: 1-by-1 from the top-right corner and half-by-half from the bottom-left. The resulting figure's area is 10⅓ times that of the initial figure.
  • For a disk (i.e. an area enclosed by a circle) the result is a disk with radius enlarged 3 times, so the area grows 9 times.
  • However, for a circle with radius R (i.e. the closed curve itself) the resulting figure is an annulus with radii R and 3R. The area ratio here is infinite, as the starting figure has area equal to zero.
CiaPan (talk) 08:45, 21 January 2013 (UTC)
  • I also found a 10⅓ ratio for removing a triangle from a domino (so it is made of three 45-45-90 triangles). I wonder what the shape is for two squares touching at a corner...
  • Yes, in that regard, a disk is sort of an infinite polygon.
  • Annulus was what I meant with the "T is just the border of S..." above. Naraht (talk) 13:12, 21 January 2013 (UTC)
'I wonder what the shape is for two squares touching at a corner...'
Let them be 1-by-1 squares, placed in Cartesian coordinates so that they touch at (0, 0) and lie in the 1st and 3rd quadrants of the coordinate system. Then the extension of the figure is the union of all unit squares whose vertices all have integer coordinates satisfying:
|x| ≤ 3
|y| ≤ 3
|x − y| ≤ 3
The extended figure consists of 24 such unit squares. --CiaPan (talk) 13:37, 21 January 2013 (UTC)
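Here is a quick brute-force confirmation of that count (a sketch of my own, using the fact that the extension of S is { 2b − a : a, b in S }, so the extension of a union of boxes is a union of reflected boxes):

```python
# S = [0,1]^2 union [-1,0]^2 (two unit squares touching at the origin).
boxes = [(0, 1, 0, 1), (-1, 0, -1, 0)]  # (xmin, xmax, ymin, ymax)

def reflect_box(A, B):
    """The box 2B - A = { 2b - a : a in A, b in B } for axis-aligned boxes A, B."""
    axmin, axmax, aymin, aymax = A
    bxmin, bxmax, bymin, bymax = B
    return (2 * bxmin - axmax, 2 * bxmax - axmin,
            2 * bymin - aymax, 2 * bymax - aymin)

pieces = [reflect_box(A, B) for A in boxes for B in boxes]

# Integer unit cells [i,i+1] x [j,j+1] covered by the union of the four pieces.
cells = set()
for (xmin, xmax, ymin, ymax) in pieces:
    for i in range(int(xmin), int(xmax)):
        for j in range(int(ymin), int(ymax)):
            cells.add((i, j))
print(len(cells))  # 24

# The same 24 cells, via the vertex condition |x| <= 3, |y| <= 3, |x - y| <= 3.
cond = {(i, j) for i in range(-3, 3) for j in range(-3, 3)
        if all(abs(x) <= 3 and abs(y) <= 3 and abs(x - y) <= 3
               for x in (i, i + 1) for y in (j, j + 1))}
print(cells == cond)  # True
```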

Independence number of graph: bound similar to Lovasz number?

Hello,

I was reading on the independence number of a graph, and I noticed that in some works, the following upper bound is obtained using linear algebra:

If A is any real symmetric matrix, indexed by the vertices of the graph, and c is a positive scalar such that:

 if $i$ is not adjacent to $j$, and
 is positive semidefinite ($J$ is the all-one matrix)

then c is an upper bound on the independence number of the graph.

The proof is very short, and relies on considering for an appropriate (0,1)-vector, based on an independent set.

Is there a name for this bound? It reminds me of the Lovász number, where one reads:

Let $B$ range over all $n \times n$ symmetric positive semidefinite matrices such that
$b_{ij} = 0$ for every $ij \in E$ and $\operatorname{Tr}(B) = 1$.
Here $\operatorname{Tr}$ denotes trace (the sum of diagonal entries)
and $J$ is the $n \times n$ matrix of ones.
Then[2]
$$\vartheta(G) = \max_B \operatorname{Tr}(BJ).$$
Note that $\operatorname{Tr}(BJ)$ is just the sum of all entries of $B$.

However, I do not know for sure if these bounds are equally strong. Where can I get more information on this? Many thanks, Evilbu (talk) 22:09, 19 January 2013 (UTC)
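For what it's worth, the quoted maximization is a semidefinite program, so it can be evaluated numerically on a small graph. A minimal sketch using the cvxpy library (my own, not part of the thread; the 5-cycle is just a convenient test graph, whose independence number is 2):

```python
import numpy as np
import cvxpy as cp

# 5-cycle C5: vertices 0..4, edges between consecutive vertices.
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]

B = cp.Variable((n, n), symmetric=True)
J = np.ones((n, n))
constraints = [B >> 0, cp.trace(B) == 1]
constraints += [B[i, j] == 0 for (i, j) in edges]  # b_ij = 0 for every edge ij

prob = cp.Problem(cp.Maximize(cp.trace(B @ J)), constraints)
prob.solve()
print(prob.value)  # about 2.236 (= sqrt(5)), an upper bound on alpha(C5) = 2
```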

  1. ^ Atiyah–MacDonald 1969, Proposition 3.16
  2. ^ See Theorem 4 in Lovász (1979).