
Wikipedia:Reference desk/Archives/Science/2014 June 5

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


June 5


alternating current

Sketch of alternating current flowing from a generator at left to a load at right, such as an incandescent light. Black dots represent electrons. 84.209.89.214 (talk) 00:43, 5 June 2014 (UTC)[reply]

I don't understand how alternating current works without turning off and on and off and on whatever it is powering. Direct current makes sense, but alternating current doesn't. How doesn't it make the lights flicker constantly? — Preceding unsigned comment added by 190.74.66.93 (talk) 23:02, 4 June 2014 (UTC)[reply]

As a matter of fact it does, but they flicker too fast for your eyes to see. 24.5.122.13 (talk) 23:10, 4 June 2014 (UTC)[reply]
According to Alternating current, the lights can flicker if the frequency is too low. Normally it's about 60 cycles per second, which is right at the border of where it can be noticed. ←Baseball Bugs What's up, Doc? carrots 23:13, 4 June 2014 (UTC)[reply]
In an incandescent bulb the light is generated when the filament heats up because of (essentially) friction between moving electrons and the wire (this is called electrical resistance, but the process is not different enough from rubbing your hands together really fast to make much of a difference). Here's the deal: if you rub your hands really fast in one direction along, say, the carpet, they get hot. This is akin to direct current. If instead, you rub your hands back and forth rapidly across the carpet, they also get hot. This is akin to alternating current. The filament of the light bulb heats up regardless of which direction the electrons are moving, so the light bulb filament gets hot enough to glow regardless of whether the electrons are moving left, right, or both ways in rapid succession. --Jayron32 23:55, 4 June 2014 (UTC)[reply]

How do the electrons travel along the wire from the power station to the light bulb if they are always changing direction? Like a person taking one step forward and one step back, they wouldn't get anywhere. — Preceding unsigned comment added by 190.74.66.93 (talk) 23:59, 4 June 2014 (UTC)[reply]

They don't travel along the wire. They shake back and forth in place. --Jayron32 00:23, 5 June 2014 (UTC)[reply]
Well, yes and no. An electrical signal travels at approximately ⅔ c. That's about 200,000 km/sec, or a wavelength of 4,000 km for a 50 Hz source. However, with the hydraulic analogy, imagine a double-ended piston as the source, connected via a long pipe to a paddle wheel. As the piston is pulled in and out, the wheel will turn, even though no water from the piston has reached the wheel. CS Miller (talk) 10:08, 5 June 2014 (UTC)[reply]
That's the speed the energy travels at. The wavelength of such a wave is not the distance any putative individual electrons would travel (once you get into the model, the concept of the "individual electron" becomes problematic, but at this level of thinking, we can just go with it). Consider the picture someone posted above. The individual dots are moving much shorter distances than the wavelength. Each dot only moves about 1/20th of a wavelength. You're confusing amplitude (the distance moved by a single element within the medium) with the wavelength (the distance between successive wave pulses). --Jayron32 13:10, 6 June 2014 (UTC)[reply]
Indeed. Going back to a hydraulic analogy (with the usual warning about not-quite-perfect analogies), consider a syringe full of water. As you press on the plunger at one end, even if you press veeeeeeery slowly, water comes out the other end almost immediately. (Actually, for a regular human eye, you won't be able to see the time between pressing the plunger and water starting to move at the other end.) The speed that individual volumes of water move down the syringe has essentially nothing to do with how fast information about a change in pressure travels down the syringe. A drop of water next to the plunger's head may be pushed along at a leisurely few millimeters per second, but the effects of the moving plunger propagate down the syringe at about the speed of sound (in water)—about 1.5 kilometers per second.
Similarly, applying an electric field to electrons at one end of a wire only moves the electrons themselves very slowly; in a copper wire, the so-called drift velocity of the electrons is on the order of a millimeter per second. (Snails typically travel faster than electrons in a wire.) But the electric field through which those electrons 'communicate' with each other moves information down the wire a lot faster: some significant fraction of the speed of light. If I poke the electrons on one end of the wire, the electrons on the other end 'know' about it virtually instantaneously, even though all the electrons in the wire have barely moved. TenOfAllTrades(talk) 13:37, 6 June 2014 (UTC)[reply]
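To put rough numbers on the drift-velocity figure mentioned above, here is a minimal back-of-the-envelope sketch. The current, wire size, and the free-electron density of copper are illustrative assumptions, not values taken from this discussion.

```python
# Back-of-the-envelope check of the "snail's pace" drift velocity claim.
# Assumed illustrative values (not from the discussion above):
#   10 A through a 1.5 mm^2 copper wire,
#   n ~ 8.5e28 free electrons per m^3 for copper (about one per atom).
I = 10.0                 # current, amperes
A = 1.5e-6               # cross-sectional area, m^2
n = 8.5e28               # free-electron density of copper, m^-3
q = 1.602e-19            # elementary charge, coulombs

v_drift = I / (n * A * q)                             # drift velocity, m/s
print(f"drift velocity ~ {v_drift*1000:.3f} mm/s")    # roughly 0.5 mm/s

# By contrast, the signal itself propagates at a large fraction of c:
c = 3.0e8
print(f"signal speed ~ {(2/3)*c:.1e} m/s")            # ~2e8 m/s, as noted above
```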
As a variation of Jayron's carpet-rubbing analogy, imagine sawing through a log with a very long saw. When you push or pull your end the other end moves similarly, not because your end has traveled over there but because the saw is solid and resists attempts to change its length. Electrons in a wire are a fluid rather than a solid, but the principle is the same: they have a preferred density (the density that leaves the wire electrically neutral), so if you forcibly move electrons in one part of the wire, the electrons elsewhere move by a similar amount to preserve that density. -- BenRG (talk) 00:25, 5 June 2014 (UTC)[reply]
Electrons in a wire that is conducting alternating current indeed arrive nowhere and merely alternate forwards and backwards, see the image. Because electrons repel one another, the alternating movement is conveyed all the way from the power station to the light bulb. In the bulb filament material, electrons and nuclei are bound together more strongly than in the external cable, i.e. it has significant electrical resistance. As a result, the electron displacements jostle the filament atoms and heat them. With a mains supply frequency of 60 Hz (USA) or 50 Hz (Europe) an incandescent lamp flickers at 120 or 100 Hz respectively since both half-cycles deliver energy; such high flicker frequencies are imperceptible. However you may just notice the 60 or 50 Hz flicker in a failing fluorescent light tube if only one electrode is emitting. 84.209.89.214 (talk) 00:48, 5 June 2014 (UTC)[reply]
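A small sketch of the arithmetic behind the 120/100 Hz flicker figure: with a sine-wave supply across a resistive filament, the instantaneous power goes as sin², which peaks twice per cycle. The voltage and resistance below are illustrative guesses for a US-style 60 Hz supply, not values from the thread.

```python
import math

# Minimal sketch: instantaneous power delivered by a sine voltage into a
# resistive load peaks twice per cycle, which is why a 60 Hz supply gives
# 120 Hz flicker.  Values are illustrative assumptions.
f = 60.0          # mains frequency, Hz
V0 = 170.0        # peak voltage (~120 V RMS), volts
R = 240.0         # load resistance, ohms (roughly a 60 W bulb)

period = 1.0 / f
for k in range(9):                       # sample one full cycle
    t = k * period / 8
    v = V0 * math.sin(2 * math.pi * f * t)
    p = v * v / R                        # power is >= 0 in both half-cycles
    print(f"t = {t*1000:5.2f} ms   P = {p:6.1f} W")
# Power hits zero twice and peaks twice per 16.7 ms cycle -> 120 pulses/s.
```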
Related to above discussion, one can see the on and off fluctuation of a normal fluorescent lamp as follows. Equipment needed: a fluorescent lamp, a light coloured (preferably white) stick a couple of feet long (a white ruler for instance or a short white curtain rod), and a dark background (eg, a dark wall or a blackboard). Stand so that the light is hitting the stick and so you are looking at the stick against the dark background. Then wave the stick rapidly from side to side, observing the light reflecting from the stick. If I have explained this correctly, you will see alternate light and dark bands, light from those times when the electrons are moving rapidly in the fluorescent tube and dark from the times when they are not moving or moving only slowly.
If you do the same experiment with an incandescent lamp, you will not see the bands since the lamp filament is hot and has a "steady" glow. CBHA (talk) 01:43, 5 June 2014 (UTC)[reply]
This also works with CRT TVs and CRT computer monitors. StuRat (talk) 02:22, 5 June 2014 (UTC)[reply]
... and the effect is even more noticeable with LEDs powered from an AC source (depending on circuitry). Dbfirs 12:52, 5 June 2014 (UTC)[reply]

Incandescent filaments don't emit light because there's a current flowing through them; they emit light because they're hot. Due to the thermal capacitance of the filament, it takes a couple hundred milliseconds for an incandescent bulb to turn on or off,[1] which is a lot longer than the four or five millisecond period between maximum current and no current or vice versa of 50 or 60 Hz mains electricity. So even if your eyes could respond that quickly, there really isn't a whole lot of variation in how much light is emitted from the filament over the course of a cycle, anyway. Red Act (talk) 02:47, 5 June 2014 (UTC)[reply]
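A crude lumped-parameter sketch of the thermal smoothing described above: if the filament's thermal time constant is assumed to be much longer than the half-cycle of the mains, its temperature only ripples slightly even though the electrical power pulses at twice the mains frequency. All parameters below (time constant, operating temperature, linearised cooling) are rough assumptions for illustration, not measured data.

```python
import math

# Crude lumped-parameter sketch (illustrative numbers, not measurements):
# a filament whose thermal time constant is much longer than the 8.3 ms
# half-cycle of 60 Hz power only ripples slightly in temperature.
f = 60.0                # mains frequency, Hz
P_avg = 60.0            # average electrical power, W
tau = 0.1               # assumed thermal time constant, s (~100 ms)
T_eq = 2700.0           # assumed steady operating temperature, kelvin
T_amb = 300.0           # ambient temperature, kelvin

# Linearised cooling: choose k so that P_avg balances losses at T_eq.
k = P_avg / (T_eq - T_amb)      # W per kelvin
C = k * tau                     # lumped heat capacity, J per kelvin

T = T_eq
dt = 1e-5
lo, hi = T_eq, T_eq
for step in range(int(0.5 / dt)):            # simulate 0.5 s
    t = step * dt
    P = 2 * P_avg * math.sin(2 * math.pi * f * t) ** 2   # pulsating power
    T += dt * (P - k * (T - T_amb)) / C
    if t > 0.4:                              # record ripple once settled
        lo, hi = min(lo, T), max(hi, T)

print(f"temperature ripple ~ {hi - lo:.0f} K around {T_eq:.0f} K")
# A few tens of kelvin of ripple on a ~2700 K filament: a small fractional
# change in temperature, hence little visible variation in light output.
```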

Note that the animation provided above shows a transmission line several wavelengths long. --ToE 15:22, 6 June 2014 (UTC)[reply]

Testing of prescription drugs for animals


When new prescription drugs are developed, there is a lengthy process involved. In a nutshell, drugs are tested on animals – and eventually on humans – before the drug is allowed on the market for general consumption. And this is all overseen by the FDA. My question is: what is the process for drugs that are prescribed to animals? Do they have some process? Do they do some kind of testing on other animals first? Is this overseen by the FDA? So, what is the analogous process for animal prescription drugs? Thanks. Joseph A. Spadaro (talk) 04:04, 5 June 2014 (UTC)[reply]

Note that in many cases animals are given the same meds as humans, such as antibiotics. In that case, presumably once they are proven safe for humans, they are assumed to be safe for animals, too, although this isn't necessarily always the case. StuRat (talk) 04:16, 5 June 2014 (UTC)[reply]
You misspelled that word. The correct spelling is "overseen" (note the quote marks). And you need to wiggle your fingers when you say it out loud as well. --Jayron32 04:26, 5 June 2014 (UTC)[reply]
Citation required. 84.209.89.214 (talk) 23:56, 5 June 2014 (UTC)[reply]
For the UK, I found this document from the Government department DEFRA, which gives guidance on the testing of veterinary drugs. For farm animals I would guess that there's some requirement to test that drugs used on them do not leave residues in their meat or milk that are dangerous to consumers. See antibiotic use in livestock, for example. AndrewWTaylor (talk) 09:17, 5 June 2014 (UTC)[reply]
In some cases, it's just a matter of availability. When our vet prescribed some pain killers for our dog, on a long-term basis, I looked up what they were (Yeah! Wikipedia!) so I could try to find a cheaper source for them. I discovered that these were drugs that had been designed for human use but had fallen out of favor with physicians for whatever reason. Because dogs can't tolerate many of the modern painkillers that humans use, they are basically stuck using an obsolete drug. Fortunately, they are still manufactured for human use in parts of the world like Russia, and various African countries - and that's the source of the drug for pets. It's hard to ask an animal whether the medication is alleviating pain or not - so knowing that it works in humans means that they probably only had to test for toxicity in each animal species they use it for. This is an interesting role-reversal, where human testing preceded widespread animal use! SteveBaker (talk) 13:40, 5 June 2014 (UTC)[reply]
Of course you can usually tell from an animal's behaviour whether it's in pain or not. Richerman (talk) 16:40, 5 June 2014 (UTC)[reply]
Yes and no. Animals often "suck it up" and keep the pain to themselves. Probably part of their survival technique. For example, in 1973, Secretariat (horse) had an abscess in his mouth and was off his feed, costing him in the Wood Memorial. Someone finally opened his mouth, checked it out, and treated it, just in time to recover and run for the Triple Crown. The horse had not given any direct indication he was in pain. He was just living with it. ←Baseball Bugs What's up, Doc? carrots 20:11, 6 June 2014 (UTC)[reply]

Thanks, all. Joseph A. Spadaro (talk) 19:55, 6 June 2014 (UTC)[reply]

Ocean pH variability


To quote a source of which I am skeptical: "Human CO2 contributions will decrease ocean pH by 0.3 over the next 100 years while sections of the world's oceans change by nearly 5 times that amount in a given year."

Does ocean pH change considerably from year to year? Is there evidence that the year-to-year fluctuation in ocean pH is much greater than the projected decrease of 0.3 pH units in 100 years? Thanks, CBHA (talk) 04:12, 5 June 2014 (UTC)[reply]

Ocean acidification looks like our lead article on this topic, but I don't actually know anything about it myself. DMacks (talk) 04:23, 5 June 2014 (UTC)[reply]
(edit conflict) It should be noted (and may even be misunderstood by your source!) that pH is a magnitude scale; that is, it is a logarithmic (rather than arithmetic) scale. In simpler terms, it is a scale based on multiplication rather than addition: a pH difference of 1 unit represents a difference in acid concentration of 10 times; a pH difference of 2 units represents a difference in acid concentration of 10x10 or 100 times; a pH difference of 3 is 1000 times different, and so on. A pH decrease of 0.3 represents a doubling of acid concentration in the ocean, while 5 times that (a decrease of 1.5 units) would be roughly a 30-fold increase. So, what the quote seems to be saying (assuming the writer actually understands the math involved) is that human CO2 generation stands to double the amount of acid in the oceans, while natural variation can lead to pH swings of 1.5 pH units (about 30 times the acid concentration). I find neither claim to be fantastical (but that doesn't make either claim true). Furthermore, logic dictates that even if both claims are true, that doesn't necessarily lead one to any particular conclusion regarding the importance of human-generated CO2 to ocean chemistry... --Jayron32 04:23, 5 June 2014 (UTC)[reply]
It might be useful if you could indicate the exact source, so we could put the statement into context. (I can't find that exact quote through a Google search.) Guessing cynically, I would ask if it's from a climate change denialist of some sort, who is trying a bit of verbal sleight of hand to argue that the anthropogenic (human-caused) changes are smaller than the natural ones, so we don't need to worry about polluting the environment. (Did I guess right?)
There are a couple of problems there. The obviously slippery bit is the part that says "sections of the world's oceans". "Sections"? How big? Which ones? What fraction of the entire ocean's area or volume? (If I stood on the seashore and squeezed a lemon into a bit of calm water, it would be technically accurate to say that I changed the pH of a "section" of the world's oceans by several units.) Yes, there are significant variations in pH of surface and coastal waters in some areas, largely due to seasonal effects (spring snowmelt washing minerals into the water, temperature changes shifting chemical equilibria, seasonal changes in sunlight and atmospheric CO2 affecting growth of algae and phytoplankton...the list is quite long). Organisms that live in these sorts of waters have adapted to these seasonal effects, but there are also very significant volumes of ocean water that don't experience these large annual swings, and for which a 0.3 pH unit shift is a big change. Comparing a change in the average pH of all the Earth's oceans with variations in pH of comparatively small volumes in specific areas is comparing apples and oranges.
Instead of pH, let's consider another property of the oceans: sea level. "Some sections" of the ocean (Bay of Fundy) have natural tides of 15 meters (50 feet). Despite that, a change of just one-fifth of that (3 meters, 10 feet) in worldwide mean sea level would cause some pretty serious problems.
The second sticky bit is that the writer seems to be glossing over the effect of adding a 0.3-unit change on top of an existing variation, shifting the mean and minimum pH downward. As noted, the pH scale is logarithmic. 5 times 0.3 is 1.5 pH units, corresponding to a (roughly) 30-fold change in concentration. 0.3 pH units, as noted, is an additional 2-fold change—that moves the maximum acid concentration to sixty times the original seasonal minimum. TenOfAllTrades(talk) 15:16, 7 June 2014 (UTC)[reply]
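For anyone who wants to check the logarithmic arithmetic above, a minimal sketch: a pH drop of x units multiplies the hydrogen-ion concentration by 10^x.

```python
# Quick check of the logarithmic arithmetic above: a pH drop of x units
# multiplies the hydrogen-ion concentration by 10**x.
for delta_pH in (0.3, 1.0, 1.5):
    factor = 10 ** delta_pH
    print(f"pH drop of {delta_pH}: [H+] increases ~{factor:.1f}x")
# 0.3 -> ~2x (the projected anthropogenic shift),
# 1.5 -> ~31.6x ("5 times" the 0.3 shift, measured in pH units),
# and stacking 0.3 on top of 1.5 gives 10**1.8 ~ 63x the seasonal minimum.
```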
At a wild guess I'd look for things in the El Niño - Humboldt Current, since these famously bring cold deep water to the surface. Note that if that is the reason, the amount the pH changes will tend to be a function of how much CO2 and other factors such as acid rain affect the surface water. In other words, the more you pollute the ocean, the more the level of the pollution changes when you stir it hard. Wnt (talk) 04:59, 8 June 2014 (UTC)[reply]

Tree-huggers


Hi,

Can this be useful for humans too?

Thanks. Apokrif (talk) 11:58, 5 June 2014 (UTC)[reply]

Trees are certainly cooler than the ambient air temperatures - so snuggling up against one of them ought to lose some heat. However, it's going to reduce the airflow over part of your skin surface - which will inhibit sweat evaporation and eliminate "wind chill". So, given our physiology, it might not be a net win. Much will depend on the air temperature and prevailing wind speed. Perhaps koalas are like dogs and do not have sweat glands over most of their body as we do? If that's the case, then the downsides of tree-hugging wouldn't be as significant as they might be for us humans. It's a tough question to answer because it's so dependent on wind speed and ambient temperatures. SteveBaker (talk) 13:30, 5 June 2014 (UTC)[reply]
Give it a try, see how it goes. I'd think it would be of marginal use in an urban environment, but if you go out to a nice old forest, evapotranspiration keeps the whole area cool, and the trunks are even cooler. If you don't have a forest handy, a nice tall cornfield is also very cool, as they have a very high leaf area index. P.S. Lying flat on the ground is how squirrels lose heat: [2] -- and it can work for humans in certain situations as well ;) SemanticMantis (talk) 14:45, 5 June 2014 (UTC)[reply]

What does pork have that beef and chicken do not have?


What is the chemical composition of pork, and how is the chemical composition of pork different from beef and chicken? It is known that some people are non-tasters, tasters, or supertasters of PTC. Is there a similar chemical in pork that would make some people avoid pork due to the repugnant taste or smell? 140.254.226.205 (talk) 13:34, 5 June 2014 (UTC)[reply]

If you're really asking why Jews and Muslims avoid pork, it's because hogs are considered "unclean", as they are "bottom feeders" the same as shellfish, catfish, etc. ←Baseball Bugs What's up, Doc? carrots 19:29, 5 June 2014 (UTC)[reply]
I don't think the rationale is quite so straightforward (would ancient Jews even know what lobsters ate?), but I'm also not sure that's even what's being asked. Pork certainly has a very distinct smell and flavour, though that seems to have lessened somewhat as the rules for keeping pigs and what to feed them have tightened. As much as I love bacon, I could certainly see how someone who didn't like the smell/taste would be unable to get past it. Even if you don't care for chicken, the taste is so bland that it's probably something you could live with. But if you don't like pork? There's no hiding that or covering it up. Matt Deres (talk) 19:47, 5 June 2014 (UTC)[reply]
While pork certainly DOES have a chemical composition, I'm not sure discovering precisely what that composition is will give you any unique insight. In Hindu culture, eating beef is forbidden, and beef too has a unique chemical composition. The reason to prohibit any food has no logical explanation in the religious texts, at least not much beyond "God says so", or it's dirty, or conversely: sacred. There are several "theories" as to why pork in particular is prohibited, but they are really just guesses that 'sound good' and seem logical (like pigs eat carrion so are unclean (they don't have to), or pig flesh resembles human flesh); however, the whole fundamental premise is illogical, so trying to make logical sense of it could just as easily be a completely futile exercise. Vespine (talk) 04:53, 6 June 2014 (UTC)[reply]
A note: human flesh tastes like beef, not pork. Plasmic Physics (talk) 05:09, 6 June 2014 (UTC)[reply]
So why do they call it Long_pig? 196.214.78.114 (talk) 06:33, 6 June 2014 (UTC)[reply]
Several notable cannibals, like Armin Meiwes, have actually likened the taste of human flesh to pork. Admittedly, others I have read likened it to veal, so I guess it depends who you ask, but you can't claim authoritatively that it doesn't taste like pork. Vespine (talk) 07:08, 6 June 2014 (UTC)[reply]
  • My understanding is that a large part of a meat's taste depends on the animal's diet. Pigs don't eat grass, and they do eat a lot of stuff that chickens, which are usually corn-fed, don't. The term "gamey" comes from the typical taste of game, rather than fodder-fed animals. One might also consider that pigs and humans have a higher and more mutually transmissible parasite load. Pretty much all fresh-caught whitefish tastes like chicken if eaten the same day it's caught, without freezing. μηδείς (talk) 23:44, 6 June 2014 (UTC)[reply]

Flipping into "starvation mode".


I asked this question a while back and didn't get any useful answers - so I thought I'd try again.

Most weight loss diets caution you that dieting too aggressively can flip your metabolism into "starvation mode" - where it uses far fewer calories (or some say, absorbs calories from food more efficiently). However, they are all over the map on how much it takes to make this happen. Some caution you that skipping even a single meal can be counter-productive in losing weight because of this effect - others say that you should take a break from your diet every few months to avoid this effect...and yet others fall somewhere between those extreme positions.

Clearly they can't all be right. So, three questions:

  1. Is there any scientific study to show how little food for how long it takes to flip this switch?
  2. How effective is this metabolic shift in terms of calorie consumption to maintain steady weight?
  3. What does it take to flip the switch back again?

Solid numbers from reputable sources are desirable here!

TIA SteveBaker (talk) 14:01, 5 June 2014 (UTC)[reply]

1) One thing to consider is that it may not be a single switch. That is, at different benchmarks different efficiencies kick in. One that's fairly well documented is that women stop having their periods at a certain low body fat percentage (see Amenorrhea#Hypothalamic), while other efficiencies, like how much glucose is used by the brain at various levels of caloric intake, might be harder to quantify (you'd need to starve people then give them PET scans, which would be unethical). StuRat (talk) 14:26, 5 June 2014 (UTC)[reply]
I don't have time to read these in detail at the moment, but Starvation response and this blog post may be a start. AndrewWTaylor (talk) 14:53, 5 June 2014 (UTC)[reply]
There are some very good and recent refs at Intermittent_fasting. Google scholar has plenty of good recent hits for /alternate day fasting/ as well. The overall trend of the findings seems to be that eating nothing (or much less) on alternate days can have specific demonstrable positive health effects. My understanding is that you have to look at the calorie intake as a time series. Eating a week's worth of food on one day and fasting the rest will almost surely trigger a starvation response. But (keeping total weekly calories constant) alternate day fasting seems to trigger no starvation effect, and actually increases fat metabolism. I of course haven't read all these papers just now, but skimming a few abstracts leads me to conclude that the suggestion that skipping one meal can trigger starvation response is just silly. SemanticMantis (talk) 14:55, 5 June 2014 (UTC)[reply]
"Eating much less than nothing" ? :-) As for skipping a meal, "starvation response" is not the correct term for it, but it might make you hungry enough that you eat whatever junk food comes your way, so could sabotage a diet in that manner. StuRat (talk) 15:01, 5 June 2014 (UTC)[reply]
I'm surprised that, with glucose meters being so cheap and widely available, there is not more discussion of using them for routine dieting by non-diabetics. Looking up such search terms online finds a few oddballs who say that yes, indeed, it provides a wealth of understanding, but I'm not having so much luck finding serious study of what happens when people adjust their diet plan in direct relation to specific blood sugar goals within the normal range. It's pretty well established that fasting or very low calorie diets are good when you have actual diabetes, because they put blood sugar lower - but if a dieter is having relatively low blood sugar and feels ravenous, I don't know that that proves it's a good time for an indulgence, though I suspect so. Wnt (talk) 00:27, 6 June 2014 (UTC)[reply]
I believe calorie restriction was a treatment for diabetes before insulin injections became available. However, with insulin, there's no need to suffer all of the problems caused by a severely low calorie diet, such as lack of energy. Of course, diabetics do need to keep their weight down, as obesity is a causative factor in Type 2 diabetes and a contributing factor in Type 1. StuRat (talk) 05:41, 6 June 2014 (UTC)[reply]
What I'm thinking of is closer to [3], where a 4-day starvation diet is sufficient to restore the hypothalamic response to glucose ingestion, most probably increasing peptide YY production to make insulin more effective. This results in improved blood sugar levels with half the (endogenous) insulin level. Wnt (talk) 12:52, 6 June 2014 (UTC)[reply]

Etymology


Anyone know the etymology of the suffixes -ane, -ene, and -yne in the terms alkane, alkene, and alkyne?? Georgia guy (talk) 19:06, 5 June 2014 (UTC)[reply]

Have you read -ane, -ene and -yne? Richerman (talk) 19:14, 5 June 2014 (UTC)[reply]
The -ane is generally used for the single-bonded hydrocarbons. The -ene is generally used for the double-bonded hydrocarbons. The -yne is generally used for the triple-bonded hydrocarbons. The -ane was proposed in 1866 by German chemist August Wilhelm von Hofmann (1818-1892) to go with -ene, -ine, and -one. They have no real meaning in themselves. Scientists can be creative, can't they? 65.24.105.132 (talk) 19:15, 5 June 2014 (UTC)[reply]
Yes, and I don't see any info about the etymologies. (response to Richerman.) Georgia guy (talk) 19:17, 5 June 2014 (UTC)[reply]
Checking EO, which may or may not help:[4][5][6][7] (There is no -yne entry) ←Baseball Bugs What's up, Doc? carrots 19:23, 5 June 2014 (UTC)[reply]
OK, -ene is a Greek feminine patronymic suffix, [8] -ane is coined from that and -yne is a variant of -ine which is "via French from Latin -ina (from -inus) and Greek -inē" (see their respective entries). Richerman (talk) 19:33, 5 June 2014 (UTC)[reply]
65.24.105.132 and Richerman have answered the question between them. If their answers seem to contradict each other, it is because these suffixes have a dual etymology. Before 1866, the Greek suffixes -ene and -ine (-ήνη, -ίνη) were used in the names of various hydrocarbons (according to OED, -ane, suffix). The selection of the suffix probably depended on the discoverer's native language or on the history of the compound. For example, -ène was used in French-derived words such as methylène, while the Germans went for words like Benzin. The name of each hydrocarbon had its unique etymology. In 1866, Hofmann came along and proposed a system in which -ane, -ene, -ine, -one and -une had specific meanings. The first three are still in use (although -ine got changed to -yne by the International Union of Chemistry in 1931, according to OED -yne, suffix). So to return to the original question, -ane was entirely made up by Hofmann to extend the original set of Greek suffixes, -ene was copied by Hofmann from a Greek suffix that was previously not used in chemistry (it meant 'an inhabitant of', as in Nazarene; and I suspect that he first used it to adapt the German Benzin to his system); while -yne was entirely made up by the IUC in 1931 from Hofmann's -ine—again, -ine existed before 1866 but did not have a systematic meaning; it was derived from Latin -inus as Richerman has said and meant just generally 'an extract'. --Heron (talk) 11:56, 7 June 2014 (UTC)[reply]

"Spreading branches"


My guidebook on trees sometimes describes a tree as having "spreading branches". What on Earth is a spreading branch? What would a non-spreading branch be? That sounds like an impossible situation. Craig Pemberton (talk) 19:27, 5 June 2014 (UTC)[reply]

A stand of poplar, with upright columnar growth, not spreading branches.
It's a description of the tree's habit, [9]. Something like a lone oak has spreading branches, while e.g. a dense colony of poplar will have an upright structure. Elms are known for their 'vase shape'. This document from the USFS describes a few of the different terms, including 'oval', 'columnar', 'vase shaped', etc. [10]. But really, check in the front of your book. There should be an introduction explaining the key terms. It might also have a gallery of silhouettes that give an archetypical example of each shape. This paper talks a bit about tree habits, and has some decent illustrations [11]. Growth habit isn't actually a very good trait for beginners to use heavily, because habit is affected by local light and soil conditions as well as species, but it can help experts easily rule in/out certain species at a distance. SemanticMantis (talk) 19:56, 5 June 2014 (UTC)[reply]
Such as the larch, SemanticMantis? (I've added "http://" to your third external link to make it operative.) Another reason why growth habit is unreliable is that it is usually evident only in mature trees. A sapling or youthful oak, elm, and horse chestnut may have similar shapes, though mature ones may be distinguishable from quite a long way away. Deor (talk) 20:27, 5 June 2014 (UTC)[reply]

Why the polemic between Bible / evolution?


The Bible is full of strange theories, so why is evolution at the center of the science/religion conflict? Couldn't the conflict orbit around some other biblical theory? OsmanRF34 (talk) 22:20, 5 June 2014 (UTC)[reply]

I suspect most contributors here on the Science desk, inevitably coming from a scientific perspective, wonder the same thing. The question really needs to be aimed at those religious folk who keep trying to pick the fight. HiLo48 (talk) 22:41, 5 June 2014 (UTC)[reply]
The concepts that would be in potential conflict with evolution would be the concept of a young Earth, and the concept that people have a different destiny than other animals; people go to heaven or hell while other animals don't. Are there other "strange theories" that are found throughout the Bible, rather than just in one or two passages? How many of the "strange theories" seem to be in conflict with physical evidence that is available to anyone who wants to visit a museum, as opposed to being in conflict with some other culture's version of history, or some other culture's myths? Jc3s5h (talk) 22:49, 5 June 2014 (UTC)[reply]
If you take the Bible literally to be a true story of how the universe and the Earth were created in a few days and how mankind was created in God's image and given dominion over the animals then evolution can't possibly be true. You can only square the Bible story with evolution by believing that the former is allegorical rather than the literal truth. The main reason why Darwin agonised so long before publishing his work was because he knew the problems it would cause in a Christian society. Richerman (talk) 23:04, 5 June 2014 (UTC)[reply]
Obviously most religious believers do take the Biblical stories as allegorical, because most of them do not seem to see a massive conflict between religion and science. The issue only exists for a small subset of believers. The OP should ask them. HiLo48 (talk) 23:10, 5 June 2014 (UTC)[reply]
A larger subset than you may think Richerman (talk) 23:23, 5 June 2014 (UTC)[reply]
Thanks. I'm not American, and that's sad. I still say this is not a question for the Science desk, but one to ask of those "believers". HiLo48 (talk) 23:50, 5 June 2014 (UTC)[reply]
Controversy involving religion and science focuses on the teaching of evolution in schools, mainly in conservative regions of the United States, and there is little serious debate on the subject in other countries. Debate persists over the historicity of the biblical flood narrative, where mainstream science regards the geometry and species capacity of Noah's Ark as implausible. 84.209.89.214 (talk) 23:35, 5 June 2014 (UTC)[reply]
We do have an article on Level of support for evolution. Richerman (talk) 23:40, 5 June 2014 (UTC)[reply]
It's nothing to do with a young earth; plenty of old earth creationists do not accept evolution. The conflict is because it deals with "where we came from?" and "why are were here?". Is there any more fundamental question you can ask? Vespine (talk) 23:46, 5 June 2014 (UTC) (Edit: I obviously meant "why are we here?" it was a typo)[reply]
Yes. How to form a question in English? 84.209.89.214 (talk) 23:53, 5 June 2014 (UTC)[reply]
It is such a major point of conflict because of the obvious contrasts between the two concepts. Simply put, at face value, accepting one requires the inherent rejection of the other. It is also the easiest concept to argue against. Wholesale rejection of evolution makes one a willful ignoramus, wholesale rejection of Creation undermines the core concepts of one's faith, which is why there are a 101 different compromises and interpretations to balance them out. Plasmic Physics (talk) 23:59, 5 June 2014 (UTC)[reply]
Evolution isn't really at the center of it. Every time astronomers say that a galaxy is billions of light years away, or a geologist dates a non-fossil, it's the same conflict. I do feel however that more can be done to reconcile the situation if people recognize that the timeline for the author's biography on the book jacket isn't the same as the timeline for the fictional characters in Middle-Earth. Wnt (talk) 00:15, 6 June 2014 (UTC)[reply]
The question asks about the controversy over evolution, as against other possible sources of controversy in the Bible. I have read all of the Gospels, and the book of Acts, as well as many other books of the NT, plus Genesis and Exodus, and various other books of the OT. Nowhere can I find anything that science could strongly challenge based on any historical or geological/paleontological evidence, save for the first 6 chapters or so of Genesis. There is simply nothing else to refute. The Resurrection might be scientifically inexplicable, but it is not hard to accept that an all-powerful God could suspend the laws of the universe once for his own son. The question is whether there is any way to refute it, since the only evidence is the testimony of a few people in the NT, and there is no claim that Christ appeared to any but his own disciples. That would suggest neither proof nor contradiction, since any differing accounts could be put down to different memories. That might present a theological challenge, but there is no scientific one. So if you allow for miracles, it is hard to use science. But it is different with evolution: if you deny that God miraculously altered the fossil record to make it appear as though the world were very old, then every fossil, every act of carbon dating, etc is still a serious problem. The only other thing that might be problematic is the story of Exodus, since it makes some very falsifiable historical claims, but this is not in the ballpark of the claims of Genesis. IBE (talk) 04:02, 6 June 2014 (UTC)[reply]
I would also count The Flood as problematic. Vespine (talk) 04:22, 6 June 2014 (UTC)[reply]
Undoubtedly. I meant to include it by the ref to Ch 6, although I didn't realise it went on for several chapters. So make that "first 9 or so chapters" IBE (talk) 08:20, 6 June 2014 (UTC)[reply]
On the other hand, if you suggest that God miraculously altered the fossil record to make it appear as though the world were very old, then you are implying by proxy that God is a false witness. What a pickle? Plasmic Physics (talk) 04:51, 6 June 2014 (UTC)[reply]
It's not an unreasonable idea. Day 1, God invents white light, comes up with a concept of "color", the spectrum, rainbows, refractions, reflections, and all manner of trippy special effects. The resulting Universe 1.0 might be a long video wonder of beauty in beauty, and probably some of the highlights still appear more or less unaltered in the current version, in nebulae, acid trips, the hues of chromium compounds. By version 3.0, God hashes out a concept of "matter", reworks the whole timeline beginning to end to include a fast, rather artificial process of planet formation, flat Earth, firmament, lots of kludges to allow a nice looking simulation where a Creator can work out things like whether mountains make more sense with the pointy end on the top or the bottom, and by the time the final release comes out there are even trees and grasses to give it all a really rich detail. Now to say that it's deceptive that this isn't visible in the history of the final version? I dunno, if you set up a character in that new Castle Wolfenstein game and he watches a TV newsreel of the war with the Nazis, would you expect it to look like a set of small black and white 2D frames with characters ticking along looting unseen schnapps from low-res treasure chests? That's just not how it works. As surely as we can create anything, we can understand to some extent what being a creator is like.
Of course, the science part that hangs us up is the question of how anyone would know what a previous version of the universe was like, and of course, any explanation we can give is really out there. Natural science should tell us nothing about it. But natural science can't tell us why we would "really feel" getting shot any more than one of those 2D characters in the original Castle Wolfenstein game. After all, both we and they are just electrical impulses directing the sound of an audible cry of pain. Consciousness remains unaddressed by science, the most common and significant and fundamental of all paranormal phenomena. But as long as its nature remains mysterious, we should take seriously the idea that people who seek after the true nature of things might be able to extract out of the Dreamtime some kind of understanding about how what is real came to be real and what is true came to be true. Wnt (talk) 06:30, 6 June 2014 (UTC)[reply]
The argument isn't about what evidence is missing, but instead what present evidence precludes certain histories. Plasmic Physics (talk) 07:10, 6 June 2014 (UTC)[reply]
I suspect that the problem is that the creation of mankind is the one story in the Bible that you can't easily dismiss. If you subtract any one of the other bible stories, you still have a viable religion. If we prove (and it's fairly trivial to do so) that there was no Noah's ark - then you say "Well, that's just an allegory - mostly a story for children" - and nothing much changes in your belief system. But if you deny that God created mankind - then everything else becomes rather shaky.
I recall that in the 1960's, people were very upset to hear that we are "descended from monkeys" - with some, even non-religious, people taking offense at this. Sadly, it gets worse, because we're also descended from various rat-like creatures, fish and maybe also some kind of slimy amoeba-like things. Not many things in science are such a radical slur.
That said, I'd have thought that people would be more rebellious at the idea that "free will" is essentially precluded by the laws of physics - but I'm not sure many people are aware of that - and even many serious scientists have a hard time taking that step.
Another possibility is that science has gradually pushed God into smaller and smaller roles. The "God of the gaps" problem. Every time physics explains something, it diminishes the amount of things that God might have done. God fits into the gaps where science has failed to deliver a sound explanation. A lot of religious people who are OK with evolution are unhappy to believe in abiogenesis (the step that took history from a bunch of chemicals floating in the primordial oceans to the very first replicating RNA molecule). This unexplained step in the chain from The Big Bang until present day gives them a "gap" into which they can squeeze their god. Science doesn't yet know how that abiogenesis step took place (although we have some ideas), so someone who wants to believe both science and religion, has a "get out clause".
But as science marches on, the number of those gaps is steadily decreasing. If we can figure out how abiogenesis happened, then we'll be able to explain everything from the first millisecond after the big bang until this very moment...so God will be squeezed into becoming the being who started things off at the beginning and never touched it again afterwards. Perhaps, seeing the writing on the wall, the fundamentalists have decided that "the buck stops here" at evolution. Even then, there are shades of reason...some people believe that evolution is true for all animals except humans, who were created by God. The science doesn't give any clue that this is true, but it's a way to insert a "gap".
If you define "God" as, in part, the creative force in the universe, then there is no "God of the gaps". ←Baseball Bugs What's up, Doc? carrots 18:43, 6 June 2014 (UTC)[reply]
For that to be true, this "God" would have to obey all of the laws of physics - bye-bye omnipotence and omniscience. At this point, the word "God" and the word "Physics" mean the same thing, and poof! we don't need religion anymore. The only reason you need this god-thing is if there are things you can't explain that lie outside of the laws of physics - and now you're back with God-of-the-gaps again. SteveBaker (talk) 20:50, 6 June 2014 (UTC)[reply]
The flaw in that argument is the assumption that we know what God really is. ←Baseball Bugs What's up, Doc? carrots 21:06, 6 June 2014 (UTC)[reply]
So if "God" is not something we know - then it's a word without a definition. We might as well worship "wibble-wibble-snarf-gak" - because we don't know what that is either. In order to make any kind of meaning here, you have to ascribe at least some definite attributes to this "God" thing. The attributes that are most commonly assigned are that this is a being with literally infinite powers...and even if that's all that we "know" (from the axiomatic definition of the word), it's enough to fall into the trap I describe. SteveBaker (talk) 15:28, 9 June 2014 (UTC)[reply]
The "trap" is of our own making - creating a word and then trying to make the phenomenon fit it, instead of the other way around. For example, the debate over whether a virus is a life form or not. If you like, you can equate God (or a subset thereof) to Nature, and it works. ←Baseball Bugs What's up, Doc? carrots 16:46, 9 June 2014 (UTC)[reply]
Jehovah's Witnesses have discussed the "God-of-the-gaps" at http://wol.jw.org/en/wol/d/r1/lp-e/2007601?q=%22god+of+the+gaps%22&p=par.
Wavelength (talk) 19:17, 6 June 2014 (UTC)[reply]
Hmmm - they talk about "real chasms of plausibility that exist in Darwinian evolution" - I don't see any such chasms. Evolution is one of the simplest notions in science - and once you grasp it, it seems inevitable with the same degree of mathematical certainty as 2+2=4. It's not just "plausible", it's extremely hard to imagine how it could possibly NOT be true! Since their axioms are incorrect, the remainder of their argument is meaningless. SteveBaker (talk) 20:50, 6 June 2014 (UTC)[reply]
Of course Darwin wasn't the author of evolution. He was the author of natural selection, a model which went a long way towards explaining how evolution might occur. Dolphin (t) 12:49, 8 June 2014 (UTC)[reply]
Once science has a clue as to the how of evolution, they'll be better able to counter arguments of the JW's and other literalists. ←Baseball Bugs What's up, Doc? carrots 21:07, 6 June 2014 (UTC)[reply]
Speaking of science, it's always interesting to watch scientists slug it out over what's real and what isn't.[13] ←Baseball Bugs What's up, Doc? carrots 21:09, 6 June 2014 (UTC)[reply]
But we do know that. In any system where:
  1. There are entities that make copies of themselves...
  2. ...using some kind of a template that is also copied....
  3. ...and occasionally, the template gets changed in a quasi-random fashion...
  4. ...and the nature of the change results in the copied entity being either more or less effective at making more copies...
...then evolution is inevitable.
We can easily make artificial systems with those properties and - guess what? - they evolve. We observe natural systems under those situations, and they evolve too. Evolution is mathematically inevitable if those few conditions are met.
'Natural' evolution comes about because:
  1. Plants and animals reproduce...
  2. ...using DNA (or perhaps, RNA) as the template....
  3. ...which is changed by errors in the copying process, by damage from radiation, etc, and/or as a result of sexual reproduction...
  4. ...and the resulting offspring are either better or worse at reproducing (in some given environment in which they find themselves)...
...so it's no surprise that natural systems evolve too. Evolution isn't something that's hard to explain at all! What would be hard would be to explain how (given all of the established facts out there) that life on earth could somehow NOT evolve! That would require some drastic changes in the way we understand our universe.
SteveBaker (talk) 15:28, 9 June 2014 (UTC)[reply]
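A toy simulation of the four conditions listed above (purely illustrative, not a model of any real organism): a population of bit-string "templates" is copied with occasional random changes, and templates that score better get copied more often. Mean fitness climbs on its own.

```python
import random

# Toy illustration of the four conditions listed above: entities copy a
# template, the copy occasionally mutates, and how often a template gets
# copied depends on what the template says.  (Illustrative only.)
random.seed(1)
GENOME_LEN = 20                      # fitness = number of 1-bits, max 20

def fitness(genome):
    return sum(genome)

# a population of 50 random bit-string "templates"
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(50)]

for generation in range(41):
    if generation % 10 == 0:
        avg = sum(map(fitness, population)) / len(population)
        print(f"generation {generation:2d}: mean fitness {avg:4.1f} / {GENOME_LEN}")
    # selection: the better-scoring half get to reproduce (condition 4)
    population.sort(key=fitness, reverse=True)
    parents = population[:25]
    # copying with occasional random changes to the template (conditions 1-3)
    population = [[(1 - bit) if random.random() < 0.02 else bit for bit in p]
                  for p in parents for _ in range(2)]
# Mean fitness climbs toward the maximum with no designer in the loop.
```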

An old, blind man makes a living by carving ornate designs and selling them for pennies by the roadside. Since he has no eyesight, he can never see his creations. They are extremely elaborate and his memory isn't what it once was. So before his customers take away their purchases, he quickly presses the carving into a block of wax to use as a guide for making another carving to sell - feeling the differences between the cast and the emerging new piece with his nimble fingertips so that each carving is an almost perfect copy of the one that just left the shelf of his humble market stall.

Once in a while he makes a small mistake when doing the carvings - and sometimes the wax is too warm or too cold and so doesn't take up the design properly. Usually, the mistake is unnoticeable and nobody much minds it. Sometimes, it makes for an uglier carving, and it's hard to sell, so he tries his best to copy them exactly. Sometimes they are so terrible that he ends up using them as kindling to warm his tiny hovel at nights. But once in a great while, the mistake adds a little to the design, adding a hint of sadness to the carved face of the small child weeping over its mother's grave, or he misses out a leaf, simplifying a wreath of foliage surrounding the frame and adding an air of elegance to an otherwise overly-complicated design. These rare carvings sell more quickly, and by taking a wax cast of only the designs that his customers like, his next efforts are usually slightly improved.

As the years go by, the old man slowly earns more pennies, can afford to pay someone to cook and clean for him so he can make more carvings. The subtle shifts of his work over time edge them toward a sublime, but elusive, perfection, every curve and twist becoming achingly beautiful to his customers.

Even as the times change, fashion swings come and go, and the world seeks new artistic styles, his work follows along. Never quite the most perfect thing imaginable - but always tracking the shifting preferences of an increasingly jaded world. For decades, the little child weeps over the grave - but for a while, it smiles whimsically or grimaces - but it can always return to grief if the fashion demands it. Ironically, the old man has no idea what the child looks like...for he is always blind, endlessly following the curves of the wax and using his tools to turn it back into close-grained wood, turning the wood to wax and then back to wood. An endless cycle of repetition - yet changing minutely in each new day.

We don't know how the blind man made his first designs - even he has long forgotten that. Perhaps a flash of inspiration came from nowhere. Perhaps he copied them from some earlier artist from another village entirely. Maybe he just copied the forms he found in rocks that he found interesting to the touch. But what we do know is that his subsequent designs were not just mere copies - they improved and they changed with the times.

SteveBaker (talk) 16:05, 9 June 2014 (UTC)[reply]

A good metaphor for any kind of evolution - biology, language, even the kids' game of telephone. ←Baseball Bugs What's up, Doc? carrots 16:50, 9 June 2014 (UTC)[reply]
Yes, a good analogy, but totally irrelevant to the question of why the concept of evolution has the greatest conflict potential. Plasmic Physics (talk) 10:21, 10 June 2014 (UTC)[reply]

Thermal question - longevity comparison of ice in a cooler


I was thinking about this the other day and it's one of those questions where I feel like the answer should be obvious but it's not to me. I have a medium-size cooler, one of those plastic jobbies that you can buy two or three small bags of ice for and keep your beer/soda cold all day while you go fishing or to the beach. Here's the question. For purposes of keeping the items cold longer, when I reach the point that the ice is mostly melted and I've bought two more bags of ice to add (thus once added I will have doubled the amount of H2O in the cooler), which will last longer (or is it near equal): Me pouring off the still-cold water in the cooler (without letting out any of the ice that's still left) and then adding my bags of new ice, OR, adding the two new bags of ice to the still-cold but almost completely melted existing contents? Please take it as a premise that the fact that the two tasks will take different amounts of time will have no significant effect on the results. If I've left off any information necessary for an answer, please advise. Thanks!--108.54.17.14 (talk) 22:20, 5 June 2014 (UTC)[reply]

If the ice is almost completely melted then the water, and hopefully the beer, will be at 0 degrees C. So, if you want to drink your beer before it reaches 4 degrees C then it seems likely that leaving the water in will delay warm up.
But it is more complex than that. The water is a better conductor of heat from the outside to the beer than ice, so to some extent this will offset the greater heat capacity of the ice box with the water in. I do not know which effect is stronger. Greglocock (talk) 23:19, 5 June 2014 (UTC)[reply]
The answer is add the ice to the water. The temperature outside of the cooler will be significantly warmer than inside the cooler and the rate of temperature change is proportional to the difference: The bigger the difference, the faster the change, in effect the colder it is inside of the box, the more loss you will get to the outside (well, that’s backwards, you are technically losing heat to the inside of the box, not losing “coolth” to the outside, but it’s essentially the same thing in this case). By tipping out the “warmer (but still cold)” melted ice you are throwing away a lot of the "coolth" and mass. Yes there are some complicating factors, but I've seen the experiment, probably on myth busters, and throwing out the cold water made the cooler warmer quicker. Vespine (talk) 23:44, 5 June 2014 (UTC)[reply]
I'd say that dumping the water may make the drinks colder initially, but for less time. Leaving the cold water in will make the drinks not quite so cold, but it will last longer. Of course, there are many other factors that go into the decision, like the weight, risk of spills, and possible other use for the cold water, like dumping it on yourself and clothes to cool down that way. StuRat (talk) 18:01, 6 June 2014 (UTC)[reply]
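A simple lumped sketch of the two options discussed above, with illustrative numbers only (a ~2 W/K cooler on a 30 °C day, 2 kg of fresh ice). It assumes heat leaks in proportionally to the temperature difference, goes into melting ice first, and only then warms the remaining water; under those assumptions the ice lasts the same time either way, but keeping the meltwater slows the warm-up afterwards.

```python
# Lumped sketch under simplifying assumptions: heat leaks in at
# k*(T_outside - T_inside), the contents stay at 0 C while any ice remains
# (the leak melts it), then the remaining water warms up.  The leak rate,
# masses, and ambient temperature are illustrative guesses.
L_FUSION = 334.0        # kJ per kg of ice melted
C_WATER = 4.19          # kJ per kg per kelvin
T_OUT = 30.0            # ambient temperature, C
K_LEAK = 0.002          # cooler leak rate, kW per kelvin (~2 W/K)

def hours_until(t_limit, ice_kg, water_kg):
    """Hours until the contents warm from 0 C to t_limit C."""
    t_in, melted = 0.0, 0.0
    dt, elapsed = 10.0, 0.0                     # 10-second steps
    while t_in < t_limit:
        q = K_LEAK * (T_OUT - t_in) * dt        # kJ leaking in this step
        if melted < ice_kg:
            melted += q / L_FUSION              # heat goes into melting ice
        else:
            t_in += q / (C_WATER * (ice_kg + water_kg))
        elapsed += dt
    return elapsed / 3600.0

print("dump the meltwater :", round(hours_until(5.0, ice_kg=2.0, water_kg=0.0), 1), "h")
print("keep the meltwater :", round(hours_until(5.0, ice_kg=2.0, water_kg=2.0), 1), "h")
# Same melt time either way, but the extra cold water slows the warm-up after
# the ice is gone -- the point made above about leaving it in.
```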
Or you could make the outside of the box wet which leads to evaporative cooling. Count Iblis (talk) 23:13, 6 June 2014 (UTC)[reply]
B.t.w., these sorts of coolers are not all that good; they have rather poor thermal insulation, especially considering their weight. What I do when I need to keep things cool is the following. I take my rucksack, my winter jacket like this one, my thick wool sweater (which will keep me warm without a jacket at temperatures slightly below freezing) and a big plastic bag. I turn the jacket and the sweater inside out, close the zippers and move the jacket into the sweater. I put this in the plastic bag, which then goes into the rucksack. The stuff that needs to be kept cool then goes inside the jacket in the rucksack. This is very lightweight and yet has a thermal insulation that is a lot better than that of a traditional cooler. If you fill it up with your icepacks and a lot of beer, it may start to weigh a lot, but since it's all in a rucksack it's easy to carry up to a weight of 50 kg. Count Iblis (talk) 00:24, 7 June 2014 (UTC)[reply]
Styrofoam coolers have the advantages of being lightweight, effective, and inexpensive. On the downside, they are flammable and not very durable. You can also get a Styrofoam filled plastic cooler, but they are heavier and more expensive, too. StuRat (talk) 03:09, 7 June 2014 (UTC)[reply]