Talk:Integer factorization/Archive 1


Blum Blum Shub

Actually I don't know a lot about BBS, but I rather suspect it would be difficult to break if you don't know the modulus. So even if factorisation is easy, if you don't tell people the modulus, BBS might be safe. Does anyone know if this is true? I vaguely remember there are some proposed uses of BBS where you do tell people the modulus, so a quick factoring method would at least reduce the usefulness of BBS.

If the discrete log problem can be solved quickly, factorization can also be done quickly; this has been proven.
As for BBS: breaking BBS is provably as hard as factoring the modulus. If you had a huge modulus and it was secret, then I guess it would be incredibly hard to break.

— Preceding unsigned comment added by 217.35.21.249 (talk) 01:01, 23 November 2002 (UTC)

Significant fields

The article says:

This problem is of significance in mathematics, cryptography, complexity theory, and quantum computers.

Is it really of significance "in" complexity theory and quantum computers? I would interpret this as meaning that a deeper understanding of integer factorization would lead to a deeper understanding of complexity theory (or quantum computers). I'm not sure that this is true (unlike the case for mathematics or cryptography).

— Preceding unsigned comment added by Cwitty (talkcontribs) 23:01, 21 November 2003 (UTC)

Is the recently added external link by 136.1.1.154 really necessary? It doesn't really aid comprehension of the topic, and to be honest, I find it to be an insult to the reader's intelligence. Decrypt3 01:09, Jul 30, 2004 (UTC)

I agree it's not particularly useful; I've removed it. — Matt 01:19, 30 July 2004 (UTC)

Unnecessary article?

There is a separate and unnecessary article called prime factorization algorithm that should be deleted. I don't see anything there worthy of merging. Anyone care to volunteer?

I vote to delete. All it is, is an extended example of trial division. If anything, the example and source code should be merged into trial division. Decrypt3 21:20, 18 February 2005 (UTC)

Integer factoring (feedback)

Factoring is reducible to Discrete Log:

I will show this for the case n = pq. Generate a random number a coprime to n and a random number x < n but very close to n. Compute b = a^x mod n, but don't tell x to the discrete-log-finding algorithm. Instead, ask it to find the discrete log of b (to base a) mod n. Let the value returned by the algorithm be y. There is a very high probability that y != x. If so, x − y is a multiple of the order of a, which can easily be used to factor n. I will not go into the details of factoring once the order (or a multiple of it) is known, since this is a well-known method. If we are unlucky and x = y, then we start over.
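A minimal Python sketch of this reduction, assuming a hypothetical oracle dlog(a, b, n) that returns some discrete log of b to base a mod n (the function names here are invented for illustration; the order-to-factor step at the end is the standard one alluded to above):

    from math import gcd
    import random

    def factor_with_dlog_oracle(n, dlog):
        # n is assumed to be an odd composite such as p*q; dlog(a, b, n) is an
        # assumed oracle returning some y with a**y == b (mod n).
        while True:
            a = random.randrange(2, n)
            g = gcd(a, n)
            if g > 1:
                return g                          # lucky: a already shares a factor
            x = random.randrange(n - n // 10, n)  # exponent close to n, kept secret
            b = pow(a, x, n)
            y = dlog(a, b, n)                     # oracle call; x was never revealed
            if y == x:
                continue                          # unlucky, start over
            m = abs(x - y)                        # a multiple of the order of a mod n
            t = m
            while t % 2 == 0:                     # strip the factors of 2 from m
                t //= 2
            r = pow(a, t, n)
            prev = None
            while r != 1:                         # square repeatedly until reaching 1 mod n
                prev, r = r, pow(r, 2, n)
            if prev is not None and prev != n - 1:
                return gcd(prev - 1, n)           # nontrivial square root of 1 yields a factor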

ISTR that you are correct, factoring can be reduced to the discrete log problem in composites. However, cryptography mostly relies on the discrete log in Z*_p with p prime, or on other groups like elliptic curves. There's no reduction between these problems and factoring. — ciphergoth 10:45, 28 September 2005 (UTC)

This reduction seems hard... I have this approach. Given n = pq, try to find a number r such that s = 2nr + 1 is prime. Thus s = 2pqr + 1, and we know r. Generate a, x from Z_s and compute b = a^x mod s. Let y = DL(a, b, s). If the order of a is a multiple of only one of p or q, then we can factor n easily. I am not sure about the probability of finding an a with the specific properties, but it still appears that the problem is easy.

Another comment: NP-hard means (1) every NP problem is reducible to it, but (2) the problem itself is not known to be in NP. Factoring is in NP, but (1) is possibly not true. So it is NOT NP-hard by either argument.

(2) is not a precondition for being NP-hard.
A problem is NP-complete iff it is in NP and NP-hard. Factoring is not known to be NP-complete, and TBH it's unlikely to be. However, this doesn't tell us whether or not it's in P. And that's by no means all we need to know to be sure of the security of cryptosystems based around it; for one thing, all of these results refer to worst-case performance, and it's the average case that matters for crypto.
Not just average-case, but it's entirely possible that P ≠ RP = NP, so probabilistic algorithms might solve factoring efficiently while it still lies outside P. But I digress. Deco 22:17, 1 February 2006 (UTC)

factor using ti-84

I have a program on my calc that can factor any number and give the sum of all factors. It only gives what goes into the number evenly. If you plug in 28 it gives you 1,28 2,14 4,7 and then the sum (28). It works for numbers up to 999,999,999, since that is the maximum number of digits it can take. I will post it soon. K-unit 20:56, 1 February 2006 (UTC)

here it is (":" = enter)

ClrAllLists:ClrHome:{1}→L1:Input "ENTER NUMBER ",X:If X≥1E10:Goto 1:1→Y:Disp "fast(1) or":Input "slow(2)?",G:Lbl 2:X/Y→B:If B<√(X):Goto 1:If B≠int(B:Goto 5:Disp Y,B:If B=X:Goto 9:B+Y+L1→L1:Lbl 9:If G=2:Pause:Lbl 5:If X/2≠int(X/2:Goto 19:1+Y→Y:Goto 2:Lbl 19:2+Y→Y:Goto 2:Lbl 1:Disp "end",X:Disp "sum of factors",L1:Pause:ClrHome:DelVar X:getDtStr(1) That's all K-unit 22:13, 1 February 2006 (UTC)

I'd suggest we not place this in the article, as numbers of this size can be factored rapidly by even the slowest devices by trial division. More interesting are the algorithms used by CAS packages such as Mathematica and Maple and by the TI-89, which has a variant of the Derive symbol engine built into it. Mathematica's in particular are documented: "FactorInteger switches between trial division, Pollard p−1, Pollard rho and quadratic sieve algorithms.".[1] Mathematica also has a factoring function based on ECM. Deco 22:15, 1 February 2006 (UTC)

I agree. But it is useful for anyone who has a ti-83 or 84 and wants this program.K-unit 18:09, 7 February 2006 (UTC)

True, but Wikipedia is not a source code repository. A more appropriate forum might be ticalc.org. Deco 22:32, 7 February 2006 (UTC)
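For reference, a minimal trial-division sketch in Python (not part of the article; the example inputs are arbitrary), showing why nine-digit inputs of the kind the calculator program handles are factored essentially instantly:

    from math import isqrt

    def trial_division(n):
        # Factor n by dividing out successive candidates up to sqrt(n).
        factors = []
        d = 2
        while d <= isqrt(n):
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    print(trial_division(28))         # [2, 2, 7]
    print(trial_division(999999999))  # [3, 3, 3, 3, 37, 333667]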

Correction needed?

The Integer factorization article shows a time complexity for GNFS different from the complexity given on the GNFS page itself. — Preceding unsigned comment added by 65.95.105.86 (talk) 01:45, 20 March 2006 (UTC)

Correction of runtime

See the GNFS article for the accurate runtime. — Preceding unsigned comment added by Qc (talkcontribs) 14:14, 16 April 2006 (UTC)

Merge

A merge from Prime factor to this article is needed. It seems to be similar enough to be in the same article. Any objections before I merge? --Mets501talk • contribs 22:08, 25 April 2006 (UTC)

Yes. That article is about the fact that numbers have prime factors. This article is about the computational problem of integer factorization. I feel strongly that they should not be merged. — ciphergoth 09:32, 26 April 2006 (UTC)
It should probably be merged into Fundamental Theorem of Arithmetic, or the first section of Prime Number, instead. Walt 14:03, 27 April 2006 (UTC)
No problem. I'll remove the merge tag from this page. --Mets501talk • contribs 22:50, 27 April 2006 (UTC)

Trivial example

The factors at the beginning of the article are deliberately chosen to be trivial, so that the reader may verify that they are factors by mental arithmetic and thus understand the term better. If a non-trivial example is to be given, it should be right at the other end of the scale, like a recent factoring record. — ciphergoth 21:58, 28 April 2006 (UTC)

Section related to music to be deleted, and title of article slightly too general for intended subject

The intended subject of this article is the integer factorization problem, particularly the factorization of large numbers as used in RSA cryptography. The more general subject of integer factorization, including uniqueness of factorization and non-mathematical applications of factoring (usually of small integers), such as the application to music in a recently added section, seems to go beyond the intended scope. Applications to music are inappropriate in an article about the hardness of factoring and should be deleted.

Perhaps the article can be retitled to integer factorization problem or integer factorization algorithms, so that the subject of the article is more precisely defined. A more general article can be kept under the title integer factorization or some variation. Otherwise, one can delete or revert more general discussion that gets added to this article, or put it somewhere else as needed. DRLB 17:12, 19 June 2006 (UTC)

Just go ahead and revert it. I was going to when I had a chance to write a note to the editor. By the way, it's his own technique he's talking about, but I was going to write a nice note anyway. Walt 17:20, 19 June 2006 (UTC)
I was suspicious that the cited source may have been written by the contributor, but noticed it's several years old. In any case I agree that this is the wrong article for it; if we were to keep it it would probably belong in some music scale related article and maybe link here. Deco 21:40, 19 June 2006 (UTC)

Prime factorization is in P

... not unknown as this article suggests. See prime numbers. User:144.135.3.174

What? This is either a joke ("prime factorization" meaning factorizing primes into their prime constituents) or nonsense. — ciphergoth 08:55, July 16, 2005 (UTC)
Primality testing is in P. This is probably what our anonymous friend meant. Prime factorization might be in P. Deco 21:41, 19 June 2006 (UTC)

Factorization record

Perhaps I'm being too pedantic, but the article says

As of 2005, the largest number factored using general-purpose methods...

Surely this is misleading? Large numbers that are p-smooth for small p can be factored in polynomial time, so should it be made clear that it is really the record for factoring semiprimes that is mentioned? Birkett 11:22, 21 May 2005 (UTC)

You're right. That edit was incorrect; I've reverted it. Decrypt3 16:39, May 21, 2005 (UTC)
My original thinking was that the clause "using general methods as part of public research" covered it for all numbers, because has anyone factored a larger number using general methods as part of public research? You could easily generate such a record, of course. Perhaps we need a better qualifier than "as part of public research"? — Matt Crypto 21:57, 22 May 2005 (UTC)
Now I'm thinking that maybe the "general-purpose methods" part should go. We use semiprimes as the gauge of factoring technology because they're the most "difficult", but in theory, the fact that they're semiprimes doesn't matter for general-purpose algorithms, by definition. I think larger numbers have been factored, but using specialized algorithms because they're not semiprimes. I think only the semiprimes are really relevant here because of their application in public-key crypto. So I think the general-purpose part is causing the problem. Decrypt3 08:09, May 23, 2005 (UTC)
I think some very large Fermat numbers have been factored with the SNFS, much much larger than the semiprime records. Deco 21:42, 19 June 2006 (UTC)

Merge of Prime factorization algorithm into article

The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section.

The result was merge into Integer factorization. -- Schneau 21:55, 24 July 2007 (UTC)

I've placed a merger tag at Prime factorization algorithm, to gauge consensus as to whether that article should be merged into this one. A majority of that article is code samples (mostly of Trial division), and the remainder seems to be redundant to this more fleshed-out article. A redirect would be better pointing here rather than to something like Trial division or one of the other special cases/methods of factorization. Thoughts? --Kinu t/c 16:03, 9 April 2007 (UTC)

I suggest simply redirecting that article to here. It seems to consist only of a vague description of trial division and a lot of code snippets, none of which seem worth saving or merging. —David Eppstein 16:19, 9 April 2007 (UTC)
Agreed. There is very little of value in the Prime factorization algorithm article, and the topic is certainly redundant. --Macrakis 16:54, 9 April 2007 (UTC)
Also agreed. VectorPosse 18:18, 9 April 2007 (UTC)
Agree with that page's redirect to here. Baccyak4H (Yak!) 18:24, 9 April 2007 (UTC)
Agree with simply redirecting that article to here. Schneau 20:35, 3 July 2007 (UTC)
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

On section "Integer factorization in polynomial time"

I think the text in this section is far too sensational to be appropriate. Simply having a factoring algorithm that runs in polynomial time does not (necessarily) mean that "factoring is almost as easy as primality testing". In particular, big-O notation hides constants which may turn out to be very large, and a polynomial runtime could still be large enough that for numbers of interest (e.g. 1024- or 2048-bit RSA keys) the runtime is infeasible. For example, if factoring an integer n can be done in O(n^1000) time, that's polynomial, but it doesn't mean it's fast, practical, or least of all "as easy as primality testing"; it just means that there exists a large enough number for which this method would be faster than the GNFS, and it gives us no clue how big that number would have to be.

Not going to edit this myself since it seems making changes without an account is grounds for auto-revert around these parts...

Uhm - a polynomial time algorithm for factoring has been known since 2002, when it was discovered by Manindra Agrawal, Neeraj Kayal, and Nitin Saxena [2]. The page should be updated to reflect this minor development :P. Now before people start running around saying "RSA is broken", let me emphasize that the algorithm is more than O(n^12) (O(n^6) if you want to make some assumptions based on conjectures) and has LARGE constant terms. Basically, I threw together the algorithm as described in the paper and attempted to factor a 17-bit number (there is a minimum size for which it works). I interrupted the program after a couple of minutes, and it was still computing the first bit of the first iteration. Even speeding up the algorithm by a factor of a couple million will not produce results better than the number field sieve for any numbers we normally use for RSA (not to mention the algorithm becomes a memory hog for large numbers, since the last stage computes in (mod X^r − 1, n), i.e. there are r [r proportional to log(n)] n-bit terms in the polynomial you are working with). Simply put, the algorithm is interesting due to its polynomial-time nature, but that alone does not make it practical - yet.

The algorithm you refer to is for primality testing, not factoring. This is a meaningful difference; no one has any idea how to construct a polynomial-time factoring algorithm, and many doubt it is possible. Ntsimp 16:53, 27 December 2006 (UTC)
Yes, there are primality testing algorithms that scale easily to numbers with many thousands of decimal digits, like the dominant Miller-Rabin primality test. Dcoetzee 22:05, 24 July 2007 (UTC)
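For illustration, a textbook-style Miller-Rabin sketch in Python (a standard formulation, not code taken from the article):

    import random

    def is_probable_prime(n, rounds=20):
        # Miller-Rabin: each random base that fails to witness compositeness
        # cuts the error probability by at least a factor of 4.
        if n < 4:
            return n in (2, 3)
        if n % 2 == 0:
            return False
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False        # definite witness found: n is composite
        return True                 # probably prime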

What is known

It is unverifiable to say "no polynomial time algorithm is known". How can we tell? In particular, if any security organisation found such an algorithm, they would have an interest in keeping this knowledge secret. I'm planning to change "known" to "widely known" or "published" or some such wording to allow for the case that it is known but secret. Stephen B Streater 22:32, 30 June 2006 (UTC)

Fair enough. Deco 22:34, 30 June 2006 (UTC)
OK - I've done a few. Stephen B Streater 22:49, 30 June 2006 (UTC)
You're being pedantic. "X is not known" doesn't mean "all people are ignorant of X", which is unverifiable, as you say. "Not known" means "not known to the research community". If you start editing this here, you'll have to do it everywhere. --— Preceding unsigned comment added by Doradus (talkcontribs) 01:46, 1 October 2007 (UTC)

Precise formulation of the decision problem

I came here to find out what exactly is the prime factorization decision problem, and it's not stated here. Can someone knowledgeable add this? --Doradus 01:47, 1 October 2007 (UTC)

The precise formulation of the decision problem is simply: given an integer n and an integer k, does n have a divisor less than k? This is natural because it enables you to locate a factor in a logarithmic number of queries using binary search. I'll see where to put this. Dcoetzee 22:04, 1 October 2007 (UTC)
To nitpick... I guess that's logarithmic in the value of the integer, which actually makes it linear in the size of the integer. Is that right? --Doradus 18:33, 2 October 2007 (UTC)
That's right. Dcoetzee 18:51, 2 October 2007 (UTC)
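A minimal Python sketch of that binary search, assuming a hypothetical decision oracle has_divisor_below(n, k) that answers whether n has a nontrivial divisor d with 1 < d < k:

    def smallest_factor(n, has_divisor_below):
        # Locate the smallest nontrivial divisor of n with O(log n) oracle calls.
        lo, hi = 2, n                # invariant: the smallest divisor lies in [lo, hi]
        while lo < hi:
            mid = (lo + hi) // 2
            if has_divisor_below(n, mid + 1):   # is there a divisor <= mid?
                hi = mid
            else:
                lo = mid + 1
        return lo

Dividing the found factor out and repeating recovers the full factorization.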

Deleting unsourced paragraph

I'm deleting the following paragraph; I don't think factorization is known to be NP-hard, and a quick web search didn't reveal any sources for it. Please don't add it back without also providing a source. Cwitty 07:13, 14 July 2005 (UTC)

The problem of finding large prime factors of even larger composite numbers is an example of an NP-Complete problem. There is a mathematical proof showing that an algorithm to solve this problem which does not require polynomial time to complete would be a solution to all other NP-Complete problems, such as finding the shortest path through a maze.

Prime factorization is believed to be NP-Complete, but this has not been proven. It's possible that a polynomial-time factorization algorithm exists. If so, that would mean that factorization is not NP-Complete, and thus the algorithm could not be used to solve other NP-Complete problems. However, if an algorithm is developed that can solve an NP-Complete problem in polynomial time, then that same algorithm could also factor integers in polynomial time. Decrypt3 22:34, July 15, 2005 (UTC)

Actually, integer factorization is believed to be in NP, but not NP-complete. — ciphergoth 08:57, July 16, 2005 (UTC)
Further, a polynomial time algorithm for factorization would not show that it is not NP-Complete; it has not been proven that there is no polynomial time algorithm for NP-Complete problems. --65.6.24.115 21:04, 23 September 2005 (UTC)
Integer factorization is unlikely to be NP-hard. This would imply NP=coNP, a result widely believed to be false. The other replies by ciphergoth and 65.6.24.115 are correct. Deco 21:45, 19 June 2006 (UTC)
Integer factorization is in NP; this was proved long ago. But this does not mean that factorization can be done in polynomial time (or that it absolutely cannot). It simply means that if it can be done in polynomial time, then all NP problems can. Conversely, if it can't, then all other NP-complete problems cannot either (but NP problems other than the NP-complete ones still might). —Preceding unsigned comment added by 85.139.77.54 (talk) 00:10, 18 December 2007 (UTC)


what would happen if someone actually found an efficient way to factor these?

what would happen? are there alternatives? —Preceding unsigned comment added by 193.136.166.125 (talk) 13:06, 14 March 2008 (UTC)

That's a good question, and there are cryptosystems not based on the hardness of factoring, but which one would become dominant is pure speculation. There might be sources discussing this. Dcoetzee 03:31, 28 April 2008 (UTC)

Issues with formula

[3]. It doesn't give a link to Big O, it doesn't explain that exp means raising e to the given power, and... I guess that's it. --68.161.148.207 (talk) 07:27, 27 April 2008 (UTC)

That second part is standard math notation, and big O notation is linked earlier in the article, but another link wouldn't hurt. Dcoetzee 04:02, 28 April 2008 (UTC)

Hardness/difficulty

(moved from my talk page - — ciphergoth)

Why do you insist on using the word "hardness" at integer factorization? Difficulty sounds much better and sounds more formal. —Mets501talk 22:18, 27 May 2006 (UTC)

I think hardness is the correct technical term in this context. You'd refer to "NP-hardness", not "NP-difficulty". — ciphergoth 12:07, 28 May 2006 (UTC)
I agree. Hardness evokes a formal sense of difficulty, as opposed to an informal "argh I can't do it" sense. There are plenty of proofs that factoring is equivalent in difficulty to tough problems that nobody can solve, much like NP-complete problems (but to a lesser degree). Deco 12:52, 28 May 2006 (UTC)
OK, no problem. We'll keep it at hardness. —Mets501talk 15:54, 28 May 2006 (UTC)
I do not know if hardness is the technical term or not, but it obviously stands out as grammatically awkward to the non-technical reader. Since encyclopedias are not designed to appeal to experts with a thorough background in technical jargon, the term should be highlighted as such. I'm not saying we should replace the technically accepted word with a merely more familiar one, but we should append an appositive phrase, i.e. "hardness, a technical term used in the mathematical and computing community to describe computational difficulty, ..." Further, if hardness is such a term, it should be in quotes or italics to denote that, and we should add a hyperlink to both a Wiktionary entry and a Wikipedia page with the same heading. [Hardness (the mathematical term describing computational difficulty) —Preceding unsigned comment added by 208.72.65.246 (talk) 02:25, 25 July 2008 (UTC)

Need Help with Citation

Hi All,

The original article used RSA-200 as an example with no reference. I added RSA-640 and have the reference (http://www.rsa.com/rsalabs/node.asp?id=2964). Unfortunately, I do not know how to add it. As wiki matures, the command set becomes more obscure.

Sorry, 151.196.13.207 (talk)noloader151.196.13.207 (talk) —Preceding undated comment was added at 04:43, 12 February 2009 (UTC).

Hi, mentioning it on the talk page, as you did, is good as well. :) The way to add references is to put it within <ref></ref> tags, although simply putting the link within square brackets should produce something passable as well.[4] More details are at Wikipedia:Citing sources, although I must admit the page might have been hard to find. Shreevatsa (talk) 05:02, 12 February 2009 (UTC)

Diffie-Hellman problem and integer factorization

I thought Diffie-Hellman required calculating logarithms on a finite field, not factorisation, or are the problems somehow related? User:Hagedis

Yes, Diffie-Hellman is based on the discrete logarithm, as is DSA. The discrete logarithm for integers modulo a prime appears to be related somehow to factorisation, but a factorisation breakthrough is not guaranteed to solve discrete logarithm. The current champion method for both problems is the Number Field Sieve, and there are only minor differences between the version that factors and the version that does discrete logarithms. Also, Shor's algorithm (which works on proposed quantum computers: no one knows if they can actually be built large enough to be useful) would blow both problems out of the water.
But there is no guarantee (as far as we know at the moment) that a breakthrough factoring algorithm would also do discrete logarithm. I'm not sure if a breakthrough discrete log method is guaranteed to do factoring as well. Several otherwise reputable people have claimed that it would, but I have never seen a proof, or a cite of one.
User:203.37.81.xxx
Yes, that's right. I wasn't thinking when I typed the list. I'll remove the DH/DSS entries. --LC


I think I have found a property by which I can generate a series of prime numbers, though some numbers must then be removed from the series using the same property that generates it. If anyone thinks this is a significant point, please mail me at kgujarat@usc.edu.


To summarize the connection between factoring and discrete logs: let n = pq where p, q are large distinct primes, and let G be a cyclic multiplicative group of order n. Let g be a generator of G.

If we have an efficient algorithm to factor n, we still cannot compute discrete logs in G to base g. Conversely, given an efficient algorithm to find discrete logs in G to base g, we still cannot factor n.

(I think this is the current state of affairs. Please correct me if I am wrong).

You're wrong. For starters, the multiplicative group modulo n = pq is not cyclic; it has no generators. Second, if you can do discrete logs, then compute k such that (for instance) 2^k = 1 (mod n). Then k divides the order of the group, which is (p-1)(q-1). If it's a large submultiple, you can use that information to immediately factor n. If not, compute another discrete log, say, 3^k = 1 (mod n). By trying different bases you're guaranteed to get a useful value (the order of the group divided by a small integer). You can never get the exact order of the group because it has no generators; that is, the Carmichael function of n is strictly less than phi(n). —Preceding unsigned comment added by 85.139.77.54 (talk) 00:02, 18 December 2007 (UTC)
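A small illustration of that last point, with toy primes chosen purely as an example:

    from math import gcd

    def lcm(a, b):
        return a * b // gcd(a, b)

    p, q = 11, 13                  # toy primes standing in for the large ones
    n = p * q
    phi = (p - 1) * (q - 1)        # size of the multiplicative group mod n: 120
    lam = lcm(p - 1, q - 1)        # Carmichael function lambda(n): 60
    print(phi, lam)                # lambda(n) < phi(n), so no element has order phi(n)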

Considering remainders for a common factorization test might help a lot: the algorithm might become polynomial.

Also, possibly dynamic programming and my so-called virtual processing, in fact keeping representative information for simulating a large number of divisions where the divisors are 1...2^m, something that might be able to be considered for virtual processing aiding dynamic programming :)

— Preceding unsigned comment added by 93.118.212.31 (talk) 19:02, 24 July 2011 (UTC)

Arecibo message.

I don't understand "It can also be used to save storage, since any multiset of prime numbers can be stored without loss of information as its product; this was exploited, for example, by the Arecibo message." I see how prime factorization can save storage, but I understood that Arecibo used a product of two primes so that extraterrestrial life forms would know to interpret it as a two-dimensional array, rather than to save storage? Billlion (talk) I can't understand it either —Preceding unsigned comment added by 162.40.202.93 (talk) 00:44, 16 September 2009 (UTC)

That certainly sounds dubious. Multiplying primes doesn't really save storage either.--Roentgenium111 (talk) 15:03, 16 November 2009 (UTC)
The number of bits needed to express a number n is about the same as the number of bits needed to express the prime factors (including multiplicity) of n, so that is not where the storage is saved. The Arecibo message was meant to be a two-dimensional picture, but for obvious reasons it was transmitted (and will(?) be received) in a one-dimensional manner. If a message containing a non-semiprime number of "pixels" were used, then the length and width of the two-dimensional image would be ambiguous. To clarify the ambiguity, one could also send the correct dimensions (i.e. x and y such that x times y equals n). There are two problems with this approach. First, this would increase the size of the message, which is the point being made in the article. Second, there is no clear way to indicate that the transmission of x and y is not part of the "picture" with which they were transmitted. I will remove the challenge templates if no one responds in about a month. Bender2k14 (talk) 15:24, 2 December 2009 (UTC)
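As a small check of the ambiguity point (the figure 1679 is the well-known bit count of the Arecibo message, not something stated on this page), a semiprime pixel count admits only one nontrivial rectangle:

    n = 1679                       # Arecibo message length in bits; 1679 = 23 * 73
    pairs = [(d, n // d) for d in range(2, n) if n % d == 0 and d <= n // d]
    print(pairs)                   # [(23, 73)] -- a unique nontrivial arrangement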

Vacuously, you cannot represent an element of the space of n-bit integers with fewer than n bits. I don't know how the idea that factoring saves space made it into the article. I have removed the offending sentence. Skippydo (talk) 17:01, 2 December 2009 (UTC)

I understand what they were trying to say. A pair of integers p, q can't be transmitted in just log p + log q bits, since you need to know where one terminates and the next begins. A typical space-efficient integer encoding (Elias delta coding) first encodes the number of bits in the length in unary, then the length in binary, then the bits themselves. This requires 1 + 2 lg lg p + lg p bits for an integer p (omitting ceiling/floor functions). Hence storing the product of two primes, pq, requires 1 + 2 lg lg pq + lg pq bits, or 1 + 2 lg (lg p + lg q) + lg p + lg q bits. Storing p and q separately with the same coding requires 1 + 2 lg lg p + lg p + 1 + 2 lg lg q + lg q bits. The difference, 2 [lg lg p + lg lg q − lg (lg p + lg q)] + 1, is positive for all p, q ≥ 2.
For example, if I take p=104729, q=128189, each prime takes 28 bits to represent (17 for the value, 6 to represent the value "17", and 7 more to give the length of 17 in unary). The product requires only 47 bits to represent, a savings of 9 bits.
This is not surprising, considering that we are removing information by constraining p, q to be prime and by turning an ordered pair into an unordered pair. In fact, I've seen a proof that used a similar idea to show a weak form of the prime number theorem in a book on Kolmogorov complexity. That doesn't necessarily mean it should be in this article, but yes, representing the product of two primes takes less space than representing the two primes separately, using typical universal codes for integers. Dcoetzee 05:01, 25 July 2011 (UTC)
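A quick Python sketch of that comparison using a textbook Elias delta code (the exact bit counts come out slightly different from the ones quoted above because the coding variant differs, but the product encoding is still shorter):

    def elias_delta(n):
        # Textbook Elias delta: gamma-code the bit length of n, then append
        # n's bits with the leading 1 dropped.
        binary = bin(n)[2:]                  # n in binary, including its leading 1
        length_bits = bin(len(binary))[2:]   # the bit length of n, in binary
        return "0" * (len(length_bits) - 1) + length_bits + binary[1:]

    p, q = 104729, 128189
    separate = len(elias_delta(p)) + len(elias_delta(q))   # primes stored separately
    together = len(elias_delta(p * q))                      # their product stored instead
    print(separate, together)                               # 50 vs 44 bits here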

Article name

I was just wondering... Should this be at prime factorization? That term seems to be more common. -- Oliver P. 04:25, 7 January 2004 (UTC)

It's rather difficult to say...this article was named "integer factorization" because there are other types of factorization (you can factorize polynomials, for example). I'd say either is OK, so "integer factorization" should stand, I think. Decrypt3 01:05, 30 July 2004 (UTC)
Besides, prime factorization is certain to draw fire from Wikipedians pointing out that primes cannot be factorized.
Herbee 18:30, 18 February 2005 (UTC)
No, "prime factorization" is understood to mean decomposing integers into primes. It's a recognized term: [5]. Decrypt3 21:20, 18 February 2005 (UTC)
Granted, but being wrong never seems to inhibit do-gooders from flaming or "being bold". I think integer factorization is slightly better than prime factorization because it draws less fire. —Herbee 15:26, 23 May 2005 (UTC)
I've never seen the likes of Lenstra, Leyland, etc refer to "prime factorization" - "integer factorization" seems to be the standard term. — ciphergoth 08:54, 16 July 2005 (UTC)

general and special

Some sources call general-purpose and special-purpose algorithms category 1 and category 2, but I don't remember which is which, or have a reference at hand. Bubba73 You talkin' to me? 03:44, 20 March 2012 (UTC)

I gave Bressoud and Wagon as a reference, which was reverted. Here are two more:
I added it back. I looked up the sources and they check out. I've certainly never heard these terms before, but I don't believe these authors invented them. I also found a 1985 paper by Carl Pomerance referring to general-purpose algorithms as the Kraitchik family [6]; in fact, I think this paper coined the term (although it appears to refer to just congruence-of-squares methods, which at the present time are more or less equivalent to general-purpose methods). Dcoetzee 04:21, 21 March 2012 (UTC)
OK. I thought it would be good to have some discussion of the categories in the section before the subsections of the different kinds. Bubba73 You talkin' to me? 16:38, 21 March 2012 (UTC)