Wikipedia:Reference desk/Archives/Computing/2008 October 18

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


October 18

protocol

Which protocol is used to connect to the web server on the Internet: TCP/IP or HTTP? I have to choose only one option. —Preceding unsigned comment added by Dahalrk (talkcontribs) 03:54, 18 October 2008 (UTC)[reply]

We do not do your homework, and either that question is malformed or you've interpreted it incorrectly. -- Consumed Crustacean (talk) 04:09, 18 October 2008 (UTC)[reply]
Both, honestly. If I had a question phrased exactly like that, I'd choose HTTP, although both are correct.--Account created to post on Reference Desk (talk) 04:16, 18 October 2008 (UTC)[reply]
I am humbled at the breadth and simplicity of this explanation. I could not have done better:
[1] 99.185.0.29 (talk) 06:28, 18 October 2008 (UTC)--[reply]
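To make the layering concrete, here is a minimal POSIX-sockets sketch in C (the host name is a placeholder and error handling is minimal): the program first opens a TCP/IP connection and only then sends HTTP text over it, which is why "both" is the accurate answer.

/* Minimal sketch: TCP/IP carries the bytes, HTTP is the text sent over it. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void) {
    struct addrinfo hints = {0}, *res;
    hints.ai_socktype = SOCK_STREAM;                       /* stream socket = TCP */
    if (getaddrinfo("www.example.com", "80", &hints, &res) != 0)
        return 1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0)
        return 1;                                          /* the TCP/IP connection */

    const char *req = "GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n";
    write(fd, req, strlen(req));                           /* the HTTP request text */

    char buf[512];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);                 /* HTTP response, carried back over TCP */

    close(fd);
    freeaddrinfo(res);
    return 0;
}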

Computer language difficulty

Hi. This is, frankly, a very trivial question. But I was wondering how one might rank computer languages by difficulty? There are too many to learn, but I'd rank the languages I'm familiar with like so:

  1. Assembly
  2. C++
  3. Java
  4. Visual Basic.NET
  5. ActionScript
  6. JavaScript
  7. HTML

So, I don't know much about PHP, COBOL, PostScript, SQL, or Perl. How would they fit in the list, and also, do I have the list ordered correctly? I heard, by the way, that PostScript was harder than C, but I only know a smattering of both languages, so I can't say.--75.71.127.230 (talk) 04:27, 18 October 2008 (UTC)[reply]

My 2 cents: of the languages I'm familiar with, starting with the easiest to learn/use: 6, 3, 2, 1. (Of course, if you're looking for something easier than any of these, check out Python, and I would only use javascript if you're writing a webpage.) HTML is not a "language" in the sense of a programming language, rather it's just a markup language-- that is, a series of formatting tags, so it doesn't really fit in with the rest of your list. Also, SQL is for database management and isn't really a general purpose language. Finally, just don't use COBOL, or you'll be sorry. Dar-Ape 04:54, 18 October 2008 (UTC)[reply]
What you've really done here is rank them in order of how close they are to human language on the bottom side and machine language on the top side. PHP is basically the same as Actionscript for your purposes. SQL is sort of its own beast—a query language and not a programming language—but again closer to natural human languages than machine languages (e.g. Assembly, bytecode, etc.). --98.217.8.46 (talk) 08:38, 18 October 2008 (UTC)[reply]
Difficulty is something I find hard to rate. I tried teaching three languages concurrently - VB.Net, C# and Java - and expected VB.Net to be the easiest to learn. But the students who tried learning all three tended to argue for Java. I think 98.217.8.46 makes a good point, but I would add a second requirement for ease of learning: consistency. If the language has a high degree of internal consistency in the syntax (for example, if you look at Java's history you see gradual improvement, in that many early methods were renamed to be more consistent across the language as a whole) it will be easier to learn and easier to program in. A third axis would be applicability: while I could write a text parser in both Perl and Java, I'd find Perl much easier. But if I wanted to create a graphical app, I could do it in Perl, but I'd choose Java. You can do either job in either language, but it is more difficult to do some jobs in some languages than in others. - Bilby (talk) 08:56, 18 October 2008 (UTC)[reply]

Heat sink

Under my laptop's heat sink, there is a square of metal foil sandwiched between the copper heat pipe and the processor die. It was originally coated with a black, carbon-like substance (although not powdery), and was lightly adhered to the surface of the heat pipe. So, what is this piece of metal used for, if it is useful at all, and is it safe not to use it and just have the die touching the heat sink (with thermal paste, of course)? I can't really put it back on as it does not stick any more, and isn't it better to have one less barrier to heat anyway? Here's a picture for reference: CPU, foil, and heatsink - WikiY Talk 05:04, 18 October 2008 (UTC)[reply]

I would suggest that you get some thermal compound, put a drop the size of a grain of rice on the copper part and a drop about half that size on the CPU, and run the edge of a plastic card over the piece of foil over and over until it is as flat as you can get it. Sticking has little to do with it. You are attaching a piece of metal to a piece of glass. The glass is both extraordinarily expensive and extraordinarily delicate. The foil was added during manufacturing to avoid breaking or chipping CPUs (the "glass"). The pressure of the screws on the heat pipe/heat sink would have been engineered with that piece of foil inside the gap (a change of as little as 15 to 30 mils can radically alter the pressure exerted on the glass, and if it makes the gap larger, it will lessen the extent to which the thermal compound can transfer heat).
Note: I have done this HUNDREDS of times. Be very careful. (Or not, if you might like to try an upgraded CPU.) 99.185.0.29 (talk) 06:35, 18 October 2008 (UTC)[reply]
And great picture!
OK, so it's "Heat Sink → Thermal Paste → Foil → Thermal Paste → CPU", right? WikiY Talk 03:36, 19 October 2008 (UTC),[reply]

Yes! And the thermal paste is thin, thin, thin... 03:26, 20 October 2008 (UTC)

Power Supply, Part 2

Good evening. This is the sequel to my previous question, "Power Supply." I recently updated my graphics card, and to make the graphics card work, I also upgraded to a 450W power supply. And it worked great...for about 30 minutes. Then, out of nowhere, my computer simply turned off. I was in the middle of something and boom, it was off. And I couldn't turn it back on. I had to put the old power supply and the old graphics card BACK in, just so I could get my computer to turn on. And if you think that's easy or fun, it's not. So my question is, uh, what the hell happened? Why did my computer spontaneously turn off like that? If it helps, the power supply I'm using now (the good one) has a green light on the back of it. The other power supply (the bad one) has a light on it that, when I first installed it, was orange, and after my computer shut off, it was red. Digger3000 (talk) 07:18, 18 October 2008 (UTC)[reply]

Probably just a bad power supply. Does it work fine if you just use it with the original video card? If not, then that's more than likely it. See if you can return it or RMA it. On an older computer purchase I had two of the things blow up (one with much smoke) before deciding to buy a slightly nicer brand; it does happen occasionally even with the good brands though.
The manual might tell you what the red light means, but it sounds like it's just there to indicate failure. Some sites also suggest shorting the ATX connector to start the power supply to see if it starts at all apart from the computer, if you're adventurous [2] -- Consumed Crustacean (talk) 07:39, 18 October 2008 (UTC)[reply]
RMA. Immediately return the new power supply for a Return Materials Authorization (RMA). We still need to know a few more things about the system, i.e. how much RAM, how many drives, and what kind of video card. —Preceding unsigned comment added by 99.185.0.29 (talk) 12:34, 18 October 2008 (UTC)[reply]
In any case, please do not open the Power Supply Unit without expert advice. Kushal (talk) 12:36, 18 October 2008 (UTC)[reply]
I wasn't suggesting that with the shorting thing. You definitely never want to try and repair a PSU unless you actually know what you're doing. The capacitors store a dangerous amount of charge. -- Consumed Crustacean (talk) 23:40, 18 October 2008 (UTC)[reply]
I am not blaming you for anything, Consumed Crustacean. I was just forewarning the OP because I thought that would be my first instinct if I did not know of the dangers of the PSU. Kushal (talk) 01:18, 19 October 2008 (UTC)[reply]

Mirror my Routers

Hey guys, I have two routers at home (both different makes) and I would like to set them up to be exact mirrors of each other, so that as far as my computers are concerned they are the same thing and I wouldn't have any sort of dropout when I moved between them. Is this even possible? And if it is, do you guys know of any resources that may help me get it set up? Vagery (talk) 07:22, 18 October 2008 (UTC)[reply]

A feature called hot standby routing can work with Cisco routers. It lets the default route be available all the time by switching the MAC address over to a working router. The chances are that your boxes do not support this. Graeme Bartlett (talk) 20:15, 18 October 2008 (UTC)[reply]
If one of the routers can be used as a wireless repeater, that would basically do what you want. A repeater is simply a device which extends the range of some network. Belisarius (talk) 06:10, 19 October 2008 (UTC)[reply]
Daisy-chain them: set the second one up as the next IP after the main router, turn off its DHCP, and set the network settings to be identical, i.e. gateway, netmask, and DNS all the same. The only problem is if you're in the second zone and you boot up to get an IP address; it sometimes does not work on my network, so I refresh the network adapter and it comes up about 50% of the time. —Preceding unsigned comment added by 99.185.0.29 (talk) 03:25, 20 October 2008 (UTC)[reply]
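For illustration, here is a rough sketch of those settings with made-up addresses and names (your own address range, SSID and security details will differ):

Router 1 (connected to the modem): LAN IP 192.168.1.1, DHCP server on, SSID "HomeNet"
Router 2 (cabled to Router 1, LAN port to LAN port): LAN IP 192.168.1.2, DHCP server off, same SSID and wireless security settings, gateway and DNS set to 192.168.1.1

Set up this way, the second box acts as a plain access point: clients see one network name and always get their addresses from the first router's DHCP server.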

security expert

How do I become a network security expert? —Preceding unsigned comment added by Atulchamola (talkcontribs) 08:57, 18 October 2008 (UTC)[reply]

We can't make you an expert here, but you may want to read articles on network security, firewall, computer virus, malware, Hacker (computer security), intrusion detection, Intrusion detection system, Intrusion-prevention system, and Cryptography, and write us articles for network admission control and network hardening. Graeme Bartlett (talk) 20:23, 18 October 2008 (UTC)[reply]
Did you mean to ask how you can demonstrate your expertise as a network security expert? Kushal (talk) 01:16, 19 October 2008 (UTC)[reply]
A number of colleges offer master's courses in computer security, network security, information security, or with similar titles. If you've already studied computing or something similar at undergraduate level, a degree such as this would provide a good way of gaining knowledge and experience and showing a serious interest. I don't know where you are, how old you are, or what level of experience you have, so it's hard to be too precise, but you should be able to find advice from school or college careers services, from a directory of graduate courses, or from the internet.--Maltelauridsbrigge (talk) 17:22, 20 October 2008 (UTC)[reply]

Neo Office Help (Question Change)

How do I get rid of an unwanted page? A blank page has appeared in the middle of my document after I made a page break and I don't want it there. Regards!--ChokinBako (talk) 13:14, 18 October 2008 (UTC)[reply]

In "Options" switch on "Show all formatting marks" to make the page break visible. Then just highlight it and delete it. As for the cause of the blank page, I would guess either the paragraph immedialtely after the page break is formatted with "Page break before" or you hit "Blank Page" instead of "Page Break" on the Insert menu - "Blank Page" actually inserts two page breaks ! Gandalf61 (talk) 13:24, 18 October 2008 (UTC)[reply]
Thanks, but I don't want to get rid of the page break - it took me over an hour to write the page - just the blank page.--ChokinBako (talk) 13:45, 18 October 2008 (UTC)[reply]
The page break is simply a marker that tells the word processor to start a new page at this point. You shouldn't lose any of your text if you delete it. To find out how page breaks work, open Help and look for a section called "Add or delete a page" or similar. Save your document before trying anything; that way you have a copy to go back to if you make a mistake. Gandalf61 (talk) 14:12, 18 October 2008 (UTC)[reply]
Thanks. Actually, all I did in the end was type a word on the blank page, then delete all the (empty) lines behind it. That worked.--ChokinBako (talk) 15:14, 18 October 2008 (UTC)[reply]

Semantic web

Hi! I'm learning about the semantic web and have a few doubts:

  1. Is XML used in conjunction with RDF or are they alternatives of the same thing??
  2. And, is there any difference between XML and XHTML?
  3. Also, are all the above, 'microformats'?
  4. Lastly, does 'metadata' refer to any semantic component used to describe data in general, or is it something specific?

—Preceding unsigned comment added by 116.68.77.73 (talk) 11:43, 18 October 2008 (UTC)[reply]

2) Yes there's a difference. XML is a specification that describes a good way to store and communicate data whose structure can be represented by a tree. In itself it says nothing about what the data means. XHTML is an application of XML to represent web documents - i.e. it defines a language conforming to XML and assigns meaning to the bits of the language. —Preceding unsigned comment added by 78.86.164.115 (talk) 13:45, 18 October 2008 (UTC)[reply]
1) My understanding is that RDF is a "data model" for describing certain types of information such that you can make use of them in logical ways. One of the ways of writing it is using XML. Roughly, RDF describes what you are writing, XML how.
2) As stated above, XML is a general-purpose notation for machine-readable data. XHTML, on the other hand, is a format for web-pages; specifically, it's a version of HTML that conforms to XML syntax, and is therefore easier to parse and define than older HTML. It can also be more easily combined with other formats which also use XML syntax.
3) No, microformats are a specific way of using existing markup (from HTML / XHTML) to encode additional information - generally, to add semantic information to existing content. So whereas <b>IMSoP</b> just means 'display the text "IMSoP" in bold', and a special XML format might use <user name="IMSoP" />, a microformat might combine the 2 as <span class="user"><b class="name">IMSoP</b></span>. This can be read both by a tool for displaying HTML, and one for spotting usernames in the document.
4) Yes, metadata is a very broad term, which basically means "data about data"; so it's very much dependent on context, but your definition seems as good as any.
Hope that helps! - IMSoP (talk) 16:00, 18 October 2008 (UTC)[reply]
Just want to point out that XML is specifically a markup language, i.e. a way of describing "text with attributes". It does get used a lot for other types of data, but it wasn't designed for that and doesn't do a very good job of it. -- BenRG (talk) 20:29, 18 October 2008 (UTC)[reply]

Thanks everyone! That helped a lot... So, does XML sort of replace HTML? —Preceding unsigned comment added by 116.68.77.73 (talk) 14:02, 19 October 2008 (UTC)[reply]

Nope, XHTML sort of replaces HTML, and is itself a kind of XML. XML has largely become the standard format for any kind of machine-readable text markup or data serialization, replacing older standards such as SGML (of which HTML is an "application", and XML a much stricter subset) and EDIFACT. HTML never did this job.
To balance BenRG's comments a bit though, I would clarify that its use for serialization is controversial, rather than accepted as a bad idea. While XML's certainly not perfect, I personally much prefer working with XML formats than, for instance, EDIFACT ones (my job in the travel industry involves both). - IMSoP (talk) 17:26, 19 October 2008 (UTC)[reply]

Thanks. Well, I have doubts again. The semantic web article and the semantic web stack picture on Wikipedia, and another site I visited, gave me the impression that XML is one layer and you use an RDF-based language on top of it, i.e. both are always used together. But some other articles I read said that you may use RDF/XML, which is RDF-based and also incorporates XML, but that it's not necessary that XML and RDF be used together. I might have misunderstood things here, but I'm still confused about what EXACTLY XML and RDF do. From my understanding, XML is used to give metadata 'tags' separately (while HTML takes care of formatting, etc.), and RDF is a specification which is used INSTEAD of XML to convey metadata, and RDF/XML is an RDF-based language that also uses elements of XML, i.e. RDF may be expressed using XML, but not necessarily. Can you tell me if, and where, I am wrong? —Preceding unsigned comment added by 116.68.77.73 (talk) 08:16, 21 October 2008 (UTC)[reply]

Font is different on wikipedia only

I posted this question in the help desk, but received no replies - I assume this means I'm having a computer issue and not a wikipedia issue.

So I must have hit some key combination on my machine while viewing wikipedia (I am using windows xp, and browsing using firefox 3.0.1), and now the font on wikipedia is screwy. I don't know if it is just bigger (ctrl- doesn't make it look right), or if it is a different font altogether, but I find it really distracting. I took a screenshot, and posted it here: http://img201.imageshack.us/my.php?image=wikifontkn1.jpg

I find this font on wikipedia to be really ugly and distracting. I didn't realize how much the font mattered before now. Any help would be appreciated. Man It's So Loud In Here (talk) 15:13, 18 October 2008 (UTC)[reply]

Looking at that screenshot, I think you have just "zoomed in" - Firefox now does full page zooming, not just text, and remembers it per website, so this would make sense. Try hitting Ctrl-0, which should reset the zoom (or View -> Zoom -> Reset in the menu bar). - IMSoP (talk) 15:33, 18 October 2008 (UTC)[reply]
If Loud is accustomed to reading in a more graceful font like Lucida (plug!), it's natural to be distressed to find it replaced by Arial. —Tamfang (talk) 16:30, 18 October 2008 (UTC)[reply]
Thanks a lot, Ctrl-0 worked great!! Tamfang are you saying I could somehow adjust my settings to replace the font I see when I view Wikipedia or is this something that the website selects on its own? Man It's So Loud In Here (talk) 05:13, 19 October 2008 (UTC)[reply]
In Firefox you can set a default font. (In the Mac version this is in the Content panel of Preferences.) Wikipedia's "MonoBook" skin apparently does not specify a font, so I see my default. —Tamfang (talk) 20:22, 19 October 2008 (UTC)[reply]
What is "full page zooming"? —Tamfang (talk) 20:22, 19 October 2008 (UTC)[reply]
Both Firefox 3 and IE 7 "zoom" everything on the page, including images, text set to a specific size, etc, rather than just changing the definition of relative text sizes on the page, as older browsers would. You can see in the screenshot that the logo still looks in proportion to the text, but a bit "stretched" where Firefox has scaled it up. - IMSoP (talk) 21:53, 19 October 2008 (UTC)[reply]
PS: Why are we whispering?

Wireless Connection Troubleshooting

Hi... I have two laptops (both Compaq models) and both are wireless enabled; one supports 802.11a/b/g networks and the other supports only b and g networks. When I connect them both in ad-hoc mode I get a very weak signal (it shows a low signal even when they are both in the same room, 5 m apart), and when I shift one of the laptops to another room it disconnects. I also get a very slow transfer speed (even though it shows 54 Mbps). Both computers have the latest drivers. One's OS is XP and the other has Vista. They both used to work fine with other computers. What can be the problem? Please help... —Preceding unsigned comment added by Piyushbehera25 (talkcontribs) 15:27, 18 October 2008 (UTC)[reply]

Domain problem

I've just got some new webspace and I've run across an odd problem. I've just installed MediaWiki and due to something to do with PHP5, I put this into .htaccess:

AddHandler application/x-httpd-php5 .php
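# (the directive above maps files ending in .php to the PHP 5 handler; the exact handler name varies by host)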

However, when I view the wiki at http://www.foodomain.com/wiki, it asks me if I want to download the file, but when I view it at http://foodomain.com/wiki, it works normally. Why is this, and how can I fix it? x42bn6 Talk Mess 18:47, 18 October 2008 (UTC)[reply]

I've run into this problem many times when setting up Apache with PHP, and clearing the browser cache has always worked. Try it. --wj32 t/c 07:54, 19 October 2008 (UTC)[reply]

Performance Comparison between single core and dual core processors

I am thinking of purchasing a new computer but would like some clarification regarding the performance attributes of dual-core and single-core processor machines. Specifically, I would like to know which of the following two options gives me a better computing machine: a single-processor Intel machine running at 3.0 GHz, or a dual-core machine running at 1.8 GHz. I understand that a dual-core machine would perform better with specific reference to multitasking, but apart from that is there any further advantage to be gained from multicore processor machines? Thanks. —Preceding unsigned comment added by 196.1.26.35 (talk) 22:35, 18 October 2008 (UTC) [reply]

We can't exactly tell you which is better because you haven't told us which processors you're referring to. Clockrate isn't the only important thing when it comes to CPU performance, and energy use / heat is also a factor.
That aside, the benefit to be had from multicore processors is indeed in running more than one thing at once. Even if an application can't do multi-threading at all (and many these days have some degree of it), a multicore CPU may provide some benefit; it will have more of a core to itself as other applications, drivers, and whatnot will be running on the other one(s). -- Consumed Crustacean (talk) 23:52, 18 October 2008 (UTC)[reply]

Thanks. It's in relation to Intel-based processors. Also, some types of applications such as PC games tend to have some hefty speed requirements, such as requiring a minimum speed of, say, 2.4 GHz. Now would such an application be able to run on, say, a dual-core Intel-based 1.8 GHz processor? Or would that not meet its specific speed requirements? Thanks in advance. Another follow-up question to the above: nowadays there's a lot of talk about Core 2 Duo processors and quad-core processors. What's the difference between the two types? I thought quad-core means 4 processors built into the chip, but in some texts and ads I've seen so far it appears that Core 2 Duo may also imply the same? If so, why the different terminology? —Preceding unsigned comment added by 196.1.26.35 (talk) 02:30, 19 October 2008 (UTC)[reply]

I'm guessing that the processors you're talking about are a 3.0 GHz Pentium 4 (P4) and a 1.8 GHz Core 2 Duo (C2D) or Pentium Dual Core (PDC). The P4 is an old design (about 3-4 years old, most likely), while the C2D and PDC are based on the Intel Core 2 design, which is much newer and higher performing. I won't get into specifics, but Core 2 based processors are better than much higher clocked P4s. Since you're talking about a single core P4 and a dual core C2D/PDC, the C2D/PDC (even though it has a lower clock speed) will probably be the better choice. Also, the P4 would use a lot more power, so you'll save some money on electricity if you choose the Core 2 based processor.
For the game requirements, does it say that it needs a 2.4 GHz Pentium 4? If that's what it says, the program can probably run okay on a 1.8 GHz Core 2 (you might have to turn the quality down a bit).
As far as the difference between dual and quad cores, the Core 2 Duo has 2 cores (hence the "Duo") while the Core 2 Quad has 4 cores. If you're running a lot of programs at once, the more cores you have the better, since each program can run on a different core. At this point in time, however, there aren't many programs that can take advantage of multiple cores, so unless you regularly run many programs at once (as in at least 8), I'd go with the dual core. -- Imperator3733 (talk) 03:24, 19 October 2008 (UTC)[reply]
The problem with the Pentium 4 design isn't that it's old. Core 2 has more in common architecturally with the Pentium Pro than with the Pentium 4. The problem with Pentium 4 is that it was designed for high clock rates and not high performance as such, because Intel was at the time still marketing based on clock rate. With Core and Core 2 they've given up on that. A Core 2 at 1.8 GHz will probably outperform a Pentium 4 at 3 GHz even with one core tied behind its back, because it does more with each cycle. I'm not sure people appreciate how much the practical meaning of "one cycle" has changed over the years. The original 8086 needed 70–120 cycles to do a single 16-bit multiply; current x86 processors can do several 32- or 64-bit multiplies per cycle.
That said, are you sure a faster CPU is what you really care about? Intel has spent an enormous amount of money convincing people that the CPU is the most important part of a computer, but that doesn't make it true. A faster CPU will only make your computer faster if your CPU usage is often at 100%, and not necessarily even then (since time spent waiting for main memory is counted as CPU time). Even the slowest CPUs are fast enough for most things these days. I'd advise most people to buy a cheap CPU and spend the money on something more useful, like more RAM or a larger screen or a better keyboard and mouse. -- BenRG (talk) 12:33, 19 October 2008 (UTC)[reply]

Thanks so much, guys, for your quick and informative responses. I really appreciate it. One other thing on the same discussion: I read somewhere that dual-core Pentium processors are, or act as, virtual 64-bit machines whereas Core 2 Duo processors function as true 64-bit processing machines, and that one of the major differences between the dual core/Core 2 Duo and older Pentium 4 processors is their support for 64-bit processing as opposed to the 32-bit processing supported by single-core Pentium 4 processors. How true is this? And what is the difference, really, between virtual 64-bit and true 64-bit processing? —Preceding unsigned comment added by 196.1.26.35 (talk) 00:35, 21 October 2008 (UTC)[reply]

pow() in c

I was wondering, and have not been able to find out anywhere, how does C actually compute pow()? I heard the trig functions are done with look-up tables; does pow() somehow use a look-up table, or some sort of expansion, or just a giant for loop multiplying the base by itself n times? I've also been trying to find out why it's a "bad" function--82.16.140.152 (talk) 23:23, 18 October 2008 (UTC)[reply]

Implementation details can vary, but one approach would be to define pow() in terms of the natural logarithm and exponential functions: pow(a,b) = exp(b*log(a)). Special cases might be included to handle a negative base with an integer exponent or a zero base with a positive exponent. pow() is not a bad function, but it performs a non-trivial calculation to get its result. It has its uses, but it can be used inefficiently, such as in the following hypotenuse calculation: pow(pow(a,2.0)+pow(b,2.0),0.5). Using pow() to obtain the square of a number is likely much less efficient than simply calculating a*a. Similarly, using pow() to calculate the square root might be less efficient than using the sqrt() function directly, since sqrt() might be better optimized for this purpose. Of course, a good implementation might detect these special cases and provide an efficient calculation, but this is not guaranteed. -- Tcncv (talk) 00:34, 19 October 2008 (UTC)[reply]
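To illustrate that point, here is a C sketch of the exp/log approach and of the hypotenuse example (illustrative only, not how any particular libm actually implements pow()):

/* pow(a,b) as exp(b*log(a)) -- valid only for a > 0; real implementations
   add many special cases (negative bases with integer exponents, zeros,
   infinities, NaNs) and more accurate algorithms. Compile with -lm.      */
#include <math.h>

double naive_pow(double a, double b) {
    return exp(b * log(a));
}

/* The hypotenuse example from above: chained pow() calls versus direct code. */
double hypot_slow(double a, double b) { return pow(pow(a, 2.0) + pow(b, 2.0), 0.5); }
double hypot_fast(double a, double b) { return sqrt(a * a + b * b); }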
I know nothing about c's implementation of pow(), but in general, if you want to apply a large integer exponent to a number, there is a much faster way of doing it than just blindly multiplying the base n times. It's based on the observation that squaring a number can significantly reduce the number of multiplications necessary. For instance, say you wanted to calculate 2^256. The naive method would make 255 multiplications, but you can get the answer much quicker using squaring: just square 2 eight times (that is, square 2 to get 4, square 4 to get 16, etc.). That's significantly faster, using only 8 multiplications instead of 255. This is called exponentiation by squaring, and is highly useful in cryptography (in RSA, the exponent part of the public key is often chosen to be 65537, meaning you have to routinely take something to the 65537th power, modded with some other number). The article on the technique gives another example: to calculate x^1,000,000,000 you need to make 999,999,999 multiplications using the naive method, but only 44 using exponentiation by squaring. That's what I call an improvement!
As I said, I have no idea whether pow() does this, I just wanted to supply some general context on exponentiation algorithms :) Belisarius (talk) 04:36, 19 October 2008 (UTC)[reply]
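For a concrete picture of exponentiation by squaring, here is a short C sketch for non-negative integer exponents (again, this is illustrative; there is no claim that any given pow() implementation works this way):

/* Square-and-multiply: examine the exponent bit by bit, squaring the base
   at each step and multiplying it into the result whenever the bit is set. */
double int_pow(double base, unsigned int n) {
    double result = 1.0;
    while (n > 0) {
        if (n & 1u)
            result *= base;   /* this bit of the exponent is set */
        base *= base;         /* square the base for the next bit */
        n >>= 1;
    }
    return result;
}
/* int_pow(2.0, 256) gets by with about ten multiplications instead of 255. */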
You can always grab the source of GNU libc and see how they do it. It probably varies by platform though. --71.106.183.17 (talk) 05:03, 19 October 2008 (UTC)[reply]
It's probably something like "fpow fp0, fp1" -- ie. issue a machine-code instruction and let the CPU worry about the details. --Carnildo (talk) 23:40, 20 October 2008 (UTC)[reply]