Wikipedia:Reference desk/Archives/Science/2011 April 11

From Wikipedia, the free encyclopedia
Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


April 11

Past oxygen

In the past, oxygen levels were at times much higher than they are today. One would think that at 22% higher levels of oxygen and forests covering most of the land, whole continents would go up in smoke. Was there anything back then that we do not have now that helped to stop the effects of forest fires? I just don't see how a fire in such conditions could ever be stopped. --T H F S W (T · C · E) 00:13, 11 April 2011 (UTC)[reply]

Fires probably would have started earlier in the season (before the plants were quite so dry), burning up most of the fuel, so later fires wouldn't be much bigger than now. StuRat (talk) 00:58, 11 April 2011 (UTC)[reply]
Actually, I have read that there were huge forest fires in the Carboniferous period. This is also mentioned in this BBC discovery program. Count Iblis (talk) 01:18, 11 April 2011 (UTC)[reply]

Today there is a great deal of forest fragmentation due to human habitation. If the forests of the period you're referring to were continuously dense across the land, then certainly the fires would have burned until they ran out of fuel. The fires today are usually curtailed by forest fragmentation and heroic human efforts, but as StuRat says, the damage to the forest would be somewhat minimised by frequent burning of the fuel. Very hot, destructive fires of the recent past were largely aggravated by misguided human protection causing a build-up of fuel. Yes, the fires would have raged far and wide, but forests would bounce right back almost immediately. 190.56.17.78 (talk) 03:57, 11 April 2011 (UTC)[reply]

One other thought: the burning depends on what type of forest it is. In my experience with Central American jungle, you almost can't make it burn even if you want it to. If those forests were damp tropical jungle, then the fires would not have gone far. 190.56.107.103 (talk) 04:07, 11 April 2011 (UTC)[reply]

Continuing that thought, and as a response to "the fires would have burned until they ran out of fuel", rain would also have stopped them. HiLo48 (talk) 05:10, 11 April 2011 (UTC)[reply]
Even in the modern world, some plants clearly require fire for their own life cycle; fire is a natural and normal thing, and some ecosystems go beyond tolerating it to actually depending on it. See Chapparal#The_chaparral_and_wildfires for one ecosystem for which regular burning is part of normal life. --Jayron32 05:36, 11 April 2011 (UTC)[reply]
Ancient oxygen levels were at times so high that even swamp plants, which would never ignite at current levels, evolved natural flame retardants. This is part of the evidence for high ancient levels. Nick Lane's Oxygen: The Molecule that Made the World is a fun book with a lot on such questions, which could be of use at Geological history of oxygen. IIRC (I don't remember the whole story), the frequent fires were part of an eventual paradoxical effect involving carbon sequestration that increased atmospheric oxygen levels even more. John Z (talk) 11:00, 13 April 2011 (UTC)[reply]

List of American northward-flowing rivers?

Does Wikipedia, or does the internet, have a list of rivers in North America that flow northward? I think some people like to say that the Red River of the North is the only one. But that seems moderately far-fetched. Michael Hardy (talk) 02:02, 11 April 2011 (UTC)[reply]

OK, I may have found it:
Red River - Minnesota and North Dakota.
St. Johns River - Jacksonville, FL.
Saginaw River - Michigan.
Shiawassee River - Michigan. (slightly easterly)
Kishwaukee River - Illinois.
Monongahela River - WVA and PA.
Youghiogheny River - WVA and PA.
Willamette River - Oregon.
Maumee River.
Little Big Horn River - Montana.
Jordan River - Utah
Mojave River - Southern California
Tennessee River - Kentucky-Tennessee
Cumberland River - Kentucky-Tennessee
Reese River (spring only) - Nevada
Michael Hardy (talk) 02:09, 11 April 2011 (UTC)[reply]
See Wikipedia:Reference desk/Archives/Language/2009 June 2#Downtown, uptown
and Wikipedia:Reference desk/Archives/Miscellaneous/2010 October 7#Rivers.
Wavelength (talk) 02:18, 11 April 2011 (UTC)[reply]

What might, however, be true is that the Red River of the North is the only significant waterway to flow from the US into Canada. Looie496 (talk) 02:20, 11 April 2011 (UTC)[reply]

The Saint Lawrence River meets both requirements. If you launch your boat from the south bank near Lake Ontario, you are unambiguously in the U.S. If you then proceed to the mouth of the river, you cannot help but pass into Canada. Also, the mouth of the Saint Lawrence is at a considerably more northern latitude than the source is. Also, the Saint John River has its source in Maine and flows into New Brunswick. --Jayron32 02:25, 11 April 2011 (UTC)[reply]
Surely all the rivers on Canada's and Alaska's northern shores flow generally northwards. Yes, I know they will be frozen often enough, but when they do flow, they flow north. HiLo48 (talk) 02:29, 11 April 2011 (UTC)[reply]
Indeed, see List of rivers of Canada. --Shantavira|feed me 08:26, 11 April 2011 (UTC)[reply]

So could the Red River and its tributaries be the only ones originating in the USA that ultimately drain into Hudson Bay? Michael Hardy (talk) 15:18, 12 April 2011 (UTC)[reply]

Not necessarily; the Great Lakes watershed appears to have outlets to Hudson Bay via the Albany River. However, the connections may be "man made", so I'm not sure if you wish to count that; but you can currently get from the Great Lakes to Hudson Bay via water connections the whole way, meaning that any of the rivers from the U.S. side that drain into the Great Lakes would qualify as well. Before these connections were made, you'd probably be right regarding the Red River, but human intervention has changed how water flows substantially. --Jayron32 15:44, 12 April 2011 (UTC)[reply]

Double Recombination II

Hello. A wild-type female fruit fly mates with a yellow, chocolate, cut male (all traits recessive). The percentage of progeny of various types is as follows:

  • 40.53% cho, y+, ct+
  • 7.85% cho, y, ct+
  • 1.84% cho, y, ct
  • 0.05% cho, y+, ct
  • 0.03% cho+, y, ct+
  • 2.08% cho+, y+, ct+
  • 8.07% cho+, y+, ct
  • 39.55% cho+, y, ct

I determined that the ct gene is in the middle. Why is the distance between y and ct the sum of the single-recombinant frequencies between those two genes and the double-recombinant frequencies? Why must I add the double-recombinant frequencies? I add the single-recombinant frequencies because the distance between y and ct will affect the number of single recombinants. Why would the double-recombinant frequencies affect the distance between ct and cho? Thanks in advance. --Mayfare (talk) 05:31, 11 April 2011 (UTC)[reply]

I should link back to Wikipedia:Reference_desk/Archives/Science/2011_March_26#Double_Recombination, where these data were first examined.
The reason to add the double recombinant frequencies is that a double recombinant still contains one crossover in the y-ct interval: the first recombination event doesn't "know" that you can also measure the second, so leaving those flies out would undercount exchanges between the two genes. If you're measuring the distance between y and ct, you want to tot up every fly with a recombination between the two, no matter what else happened, so that you have a total figure.
However, as I linked from there, there is a more meaningful calculation to make with the double recombinants: you want to make sure that they really are uncommon relative to what you think are the single recombinants. Because there's a chance your data could just be inaccurate - maybe you can't score flies right, maybe a gene reduces viability, maybe two genes crossed together reduce viability. In such a situation the recombination rates you measure between pairs of genes might put the genes in the wrong order, and only double recombinants will set you straight. Wnt (talk) 00:18, 12 April 2011 (UTC)[reply]
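
To make the arithmetic concrete, here is a minimal sketch in Python (hypothetical variable names; gene order taken as y - ct - cho, with parental gametes cho y+ ct+ and cho+ y ct, as inferred from the most and least frequent classes above). A fly is counted toward an interval's map distance whenever its alleles across that interval are in a non-parental combination, which is exactly why the double recombinants get added in: each double crossover contains one exchange in each interval.

    # A tally of the progeny classes above (percentages), assuming the gene
    # order y - ct - cho and the parental gametes cho y+ ct+ and cho+ y ct.
    progeny = {
        ("cho",  "y+", "ct+"): 40.53,   # parental
        ("cho",  "y",  "ct+"):  7.85,   # single crossover, y-ct interval
        ("cho",  "y",  "ct"):   1.84,   # single crossover, ct-cho interval
        ("cho",  "y+", "ct"):   0.05,   # double crossover
        ("cho+", "y",  "ct+"):  0.03,   # double crossover
        ("cho+", "y+", "ct+"):  2.08,   # single crossover, ct-cho interval
        ("cho+", "y+", "ct"):   8.07,   # single crossover, y-ct interval
        ("cho+", "y",  "ct"):  39.55,   # parental
    }

    # Allele pairings seen in the two parental classes.
    parental_y_ct = {("y+", "ct+"), ("y", "ct")}
    parental_ct_cho = {("ct+", "cho"), ("ct", "cho+")}

    # A fly contributes to an interval whenever its pairing across that
    # interval is non-parental; double recombinants therefore count in both.
    dist_y_ct = sum(pct for (cho, y, ct), pct in progeny.items()
                    if (y, ct) not in parental_y_ct)
    dist_ct_cho = sum(pct for (cho, y, ct), pct in progeny.items()
                      if (ct, cho) not in parental_ct_cho)

    print(round(dist_y_ct, 2), round(dist_ct_cho, 2))   # 16.0 4.0

Run on the numbers above, this gives about 16 map units for y-ct and 4 map units for ct-cho, with only 0.08% double recombinants, which is the kind of consistency check described above.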

Photon travel time during stellar collision

I was reading an article last week about a recent gamma-ray burst, which was thought to be a star crashing into a galactic black hole. The Sun article says "...Therefore it takes a long time for radiation to reach the Sun's surface. Estimates of the photon travel time range between 10,000 and 170,000 years." Since the physical substance of the core is disrupted once the star passes the Roche limit, do we see that 10,000 to 170,000 years' worth of photon generation more or less all at once? Tdjewell (talk) 16:01, 11 April 2011 (UTC)[reply]

Sure, more or less. Not only that, but the star's matter also gets heated as it spirals down the accretion disk. Dauto (talk) 20:47, 11 April 2011 (UTC)[reply]

Rabbit death

Resolved

This is the death of humans who eat nothing but rabbit, supposedly due to a lack of fat:

A) Do we have an article ?

B) Is this condition real ?

C) Is it really due to a lack of fat in the rabbits ?

D) Can't humans produce fat, like other animals ? StuRat (talk) 16:40, 11 April 2011 (UTC)[reply]

Rabbit starvation APL (talk) 16:47, 11 April 2011 (UTC)[reply]
It isn't just about fat: rabbit (or any meat) is also entirely lacking in carbohydrates. Humans can produce fat from carbohydrate, but not, I believe, from protein. I hadn't heard of this condition before, but I have little doubt that it is real -- it's pretty well known that it is dangerous for too high a fraction of the diet to consist of protein. Looie496 (talk) 17:55, 11 April 2011 (UTC)[reply]
Don't tell that to the Inuit. Matt Deres (talk) 18:02, 11 April 2011 (UTC)[reply]
Eh? Most of the meat eaten by the Inuit contains a lot of fat. You can get by on either fat or carbs, but you can't live long with neither of them. Looie496 (talk) 18:13, 11 April 2011 (UTC)[reply]
Indeed, which is why it is the leanness of rabbit that creates the problem. It's not that I'm necessarily disagreeing with what you said; I'm just trying to point out that your point could be misconstrued. The OP asked if it was due to absence of fat - and that's exactly the problem. Matt Deres (talk) 18:24, 11 April 2011 (UTC)[reply]
Traditionally (back in the old days), at least in the South, easily obtainable wild game, like rabbit and squirrel, was often prepared in a vegetable-based stew with a lot of starchy wild plants that would contain carbs. Quinn THUNDER 19:13, 11 April 2011 (UTC)[reply]
How much less fat does rabbit meat have than its nearest common competitor—perhaps chicken meat? I mean, rabbit meat contains some fat. Bus stop (talk) 19:15, 11 April 2011 (UTC)[reply]
I'm at work and cannot do much in the way of research, but that info is probably pretty easily available. I think the problem isn't the particular meat per se, it's the reliance on that meat. If you're living off of rabbits and other small game, it could be that that's practically all you're living on (consider trappers and furriers), but if you're eating chicken, you're pretty much assured of also having eggs in your diet - and eggs have fat. Matt Deres (talk) 20:09, 11 April 2011 (UTC)[reply]

Thanks, all. I've now added a redirect from "rabbit death". StuRat (talk) 19:22, 11 April 2011 (UTC)[reply]

"Rabbit Death" also brings to mind Elmer Fudd and the Bunny Man...but I guess no one technically died in those examples. Quinn THUNDER 20:45, 11 April 2011 (UTC)[reply]
What are you doing?! Rabbit death should redirect to the most foul, cruel, and bad-tempered rodent you ever set eyes on! Clarityfiend (talk) 04:16, 12 April 2011 (UTC)[reply]
I don't know the answer, but the catabolism of proteins produces acetyl-CoA which can be used by fatty acid synthase, so it's not true that fat cannot be made from proteins in humans. However, note that certain unsaturated fatty acids cannot be synthesized in humans but must be eaten (essential fatty acids). Icek (talk) 23:08, 11 April 2011 (UTC)[reply]
I find this: "Domestically produced rabbit meat contains less fat than other meats. Again, beginning with the rabbit we see only 10.2% fat per pound compared with chicken at 11.0%, turkey at 20.2%, veal at 14.0%, good beef at 28.0%, lamb comes in at 27.7% and once again pork has a whopping 45.0% fat per pound." [1] What I don't understand is that rabbit seems to have only slightly less fat than chicken. Bus stop (talk) 01:21, 12 April 2011 (UTC)[reply]
What I don't understand is what kind of measurement "% per pound" is. Apparently a three-pound pig will contain 135% fat? Where does it store the last 35%? –Henning Makholm (talk) 02:34, 12 April 2011 (UTC)[reply]
I would take that to be a way of saying "percent by weight", as opposed to "percent by volume" or "percent of calories". StuRat (talk) 07:11, 12 April 2011 (UTC)[reply]
The mammalian hemoglobins will eventually kill you. Stay away from red meat. Imagine Reason (talk) 16:11, 12 April 2011 (UTC)[reply]
Are you suggesting that Sturat is not a mammal? 86.164.75.102 (talk) 20:08, 15 April 2011 (UTC)[reply]

Thanks everyone, I will mark this resolved. StuRat (talk) 18:17, 15 April 2011 (UTC)[reply]

Photons

How can photons travel only at the speed of light if they have no mass? What would slow them down if they are massless? Why don't photons gain mass as they travel at the speed of light? Does E=mc2 not apply to photons, since they are light? —Preceding unsigned comment added by 82.38.96.241 (talk) 22:31, 11 April 2011 (UTC)[reply]

There are two kinds of masses, rest mass and relativistic mass. The difference between rest mass and relativistic mass is the mass difference provided by E=mc2. What that means for photons is that they have no rest mass, but they do have a relativistic mass. This means that (in broad approximation), so long as they have kinetic energy (are moving), they have a mass and can interact with anything that has an effect on masses. Thus, photons can interact with other massive objects and undergo gravitational red shift. Photons can interact with matter (being absorbed and emitted by electrons as described in the Bohr model); this interaction with matter is what "slows" light down when it isn't in a vacuum. I'm sure I made some mistakes in presenting my best understood, layman's approximation to reality. Someone who actually knows physics will be along shortly to tell you where I went wrong. --Jayron32 22:41, 11 April 2011 (UTC)[reply]
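For reference, the textbook relation behind this reply and the ones below is the full energy-momentum formula of special relativity, of which E=mc2 is just the rest-frame special case (here p is momentum, m rest mass, \nu the photon's frequency); a brief sketch:

    E^2 = (pc)^2 + (mc^2)^2
    p = 0 (massive particle at rest):   E = mc^2
    m = 0 (photon):                     E = pc,  so  p = E/c = h\nu/c

So a photon carries energy and momentum with zero rest mass; assigning it a "relativistic mass" of E/c^2 = h\nu/c^2 is the bookkeeping described above, and it is that energy and momentum, not a rest mass, that couple to gravity.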
Photons are always moving at the speed of light. Photons either move at the speed of light, or do not exist. Sometimes, we describe a wave packet of light as an aggregate of many photons; and in the case of a significant interaction with certain types of matter, the speed of the wave packet is less than c. But the speed of any individual photon is always exactly c. The wave packet can be slower because of delays and statistical effects related to the emission and re-absorption of individual photons as they interact with atoms. For brief periods of time, a photon may be absorbed (at which point it ceases to exist as a photon, and the energy exists as some elevated state of the atom), and later, a new photon is re-emitted. Nimur (talk) 23:53, 11 April 2011 (UTC)[reply]
Once again, that's not right. There is no "emission and re-absorption of individual photons as they interact with atoms" when a photon moves through matter (say, a photon moving through glass). If that happened, the photon would follow a random path and would exchange energy with the matter, changing its energy and frequency. That's what happens, for instance, to a photon as it finds its way out from the Sun, which can take tens of thousands of years, and by the time it comes out it has completely thermalized (reached thermodynamic equilibrium with the matter). That doesn't happen to a photon moving through glass. The photon that is moving through glass is indeed moving at a speed smaller than the speed of light. Dauto (talk) 01:09, 12 April 2011 (UTC)[reply]
Here's another description, published by Argonne National Lab: Photon Re-Emission. After all, these are merely theoretical models of actual physical processes. Which model is the best description depends on your specific experimental conditions. I'm very sure that emission and re-absorption is both a widely-used and very applicable model for photon/matter interaction. This is the approach described in several of the E&M and condensed-matter books I've read. A photon doesn't actually know it's in the vicinity of an atom (or inside glass) until it interacts with an atom; so it is by definition either scattered or absorbed/re-emitted. I have seen this approach for photon-matter interaction formalized using compton scattering, raman scattering, and with classical electrodynamic interactions in plasmas. The frequency is changed; dispersion is by definition a non-linear optical phenomenon. Statistically aggregated over a large number of photons, the result is a distortion of the wave packet envelope; equally well described as a frequency-dependent index of refraction, a dispersive medium, and so on. There are surely circumstances where this formulation of statistical photon absorption/re-emission doesn't apply very well, but I disagree with Dauto's assertion that it's "wrong." Nimur (talk) 02:32, 12 April 2011 (UTC)[reply]
A quantum mechanical description of photons requires you to account for the small probability that the photon is absorbed / reemitted from each atom. This causes the photon's wavefunction to evolve, which affects the probabilities of observing the photon at later points. This is true even though in general you don't see the photon being absorbed, and the actual probability of it being absorbed by any specific atom is quite low. More generally, one should keep in mind that ALL photon interactions in quantum mechanics can be understood as some flavor of absorption / reemission event (even scattering events get described this way if you look closely enough). Quantum mechanical photons really don't do anything else.
Personally though, I consider using absorption and reemission to describe the propagation of light through matter to be something of a red herring. The retardation of light moving through a dielectric (such as glass) can be understood in purely classical terms without ever resorting to quantum mechanics or absorption / reemission. An electromagnetic plane wave impinging on a dielectric excites a counter oscillation in the material. Because this involves accelerating charges, the counter oscillation will emit electromagnetic radiation. In the steady state, the superposition of the incident and excited radiation leads to a wavefront whose apparent velocity is lower than the velocity of light in free space. All of this can be worked out in detail, and you get essentially the right answer using mesoscale properties and classical electromagnetism without the need to consider either photons or atoms. So, yes, the quantum mechanical description is correct, but using that formulation actually tends to obscure the underlying process, since you tend to overlook the counter oscillation excited in the dielectric. Dragons flight (talk) 04:35, 12 April 2011 (UTC)[reply]
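For the curious, here is a minimal sketch of the classical calculation being described, using the standard driven, damped (Lorentz) oscillator model for the bound charges; the symbols N, \omega_0 and \gamma (number density of oscillators, resonance frequency, damping rate) are model parameters, not anything quoted from this thread:

    m\ddot{x} + m\gamma\dot{x} + m\omega_0^2 x = -eE(t)                 (a bound electron driven by the incident wave)
    x(\omega) = \frac{(-e/m)\,E(\omega)}{\omega_0^2 - \omega^2 - i\gamma\omega}   (the steady-state counter oscillation)
    P(\omega) = -Ne\,x(\omega) = \epsilon_0\,\chi(\omega)\,E(\omega),
        \qquad \chi(\omega) = \frac{Ne^2/(\epsilon_0 m)}{\omega_0^2 - \omega^2 - i\gamma\omega}
    n(\omega)^2 = 1 + \chi(\omega), \qquad v_{\mathrm{phase}} = \frac{c}{n(\omega)}

Below resonance the real part of \chi is positive, so n > 1 and the phase velocity comes out below c: the field re-radiated by the counter oscillation, superposed on the incident wave, is what appears macroscopically as a frequency-dependent index of refraction, with no photons or individual atoms needed.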
Yes, that is the correct description. Now, some phenomena do require quantum mechanics for a proper description, in which case the wave you described is quantized, and the counter oscillations you described are included in the quantization. The whole thing becomes the photon that propagates down the medium without interaction. (Unless it does interact, in which case the photon is scattered or absorbed, which is what happens in the example I gave of a photon inside the Sun.) That photon includes the oscillation of the electromagnetic field as well as the oscillation of the medium itself, all within one particle. Sometimes people call that a quasiparticle, but that's just nomenclature. Physically, there is no difference between a "real particle" and a "quasiparticle". Dauto (talk) 05:33, 12 April 2011 (UTC)[reply]
Right; and, physically, there is no difference between the first photon that got absorbed and the one that gets re-emitted after a delay; they are identical particles, and for purposes of nomenclature, you can call them "the same particle." There's no meaningful way to determine experimentally whether one single photon got slowed down, or if a new photon was created after the first one got absorbed/destroyed. Individual photons have no uniquely-identifiable features. Nimur (talk) 14:34, 12 April 2011 (UTC)[reply]
Yes, and a photon moving through vacuum also gets "absorbed" and "re-emitted" by vacuum quantum fluctuations (see vacuum polarization), and the final photon is indistinguishable from the original one. Nobody has a problem with identifying the final photon with the initial photon in this situation, and the consistent thing to do is to identify the final photon with the initial photon for the case of a photon moving through matter as well, unless the photon changes somehow, becoming distinguishable from the original one, which happens when it gets scattered or actually is absorbed and re-emitted. Dauto (talk) 15:11, 12 April 2011 (UTC)[reply]