Wikipedia:Reference desk/Archives/Science/2013 May 8

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


May 8

Predicting the history can be wrong

Is there any field in science, besides astrophysics and predicting far-future events in astronomy, that involves speculation and unreliable guesses? I think in a straightforward way, which is why I assumed academic research papers were simply well-written; I was never aware they contain guesses and unreliable speculation. Take plate tectonics: I have seen several different websites discussing how the continents will rearrange themselves hundreds of millions of years in the future, and the scientists come up with tens of different results. I thought reconstructions of geological history were quite accurate, because of what the fossils give us. I thought dating things as far back as 500 to 600 million years ago was still pretty accurate. Between the Mesozoic and Jurassic periods, and on the geological scale from 70 million years ago up to today, I never hear of any observational error in where the continents were placed. If errors have been found in the Mesozoic or Jurassic, where might they have messed up?--69.233.254.115 (talk) 00:29, 8 May 2013 (UTC)[reply]

It's much easier to go backward than forward. Going backward is like putting together a jigsaw puzzle. For example, the east coast of South America fits neatly into the west coast of Africa, and when you line them up, the surface rock formations also match nicely. But going forward requires understanding the mechanisms that make plates move, and the simple fact is that we don't fully understand them. We can project forward about 100 million years by assuming that plates will continue to move at their current rates, but beyond that, everything is guesswork. We know that plates sometimes change their direction of motion, but we don't know why. (The most popular theory is that the motion is driven by mantle plumes, but that theory is controversial and in any case nobody knows for sure how mantle plumes change over time.)
Even I, who like to think I understand this stuff to some degree, have been tricked by all the speculations. I have a National Geographic atlas that confidently shows the state of the Earth 250 million years in the future, with all the continents coalescing into Pangaea Ultima. It took me a lot of reading to figure out that this is really no more than one person's wild-ass guess.
But to step back from this, making predictions of this sort is simply part of the process of science. Scientists know that every prediction is based on theory and data, and the weaker the quality of the theory or data, the less meaningful the prediction. Geologists know enough not to take these plate tectonic predictions very seriously. The problem comes when predictions are fed to the general public, who don't have enough background knowledge to judge them. Looie496 (talk) 01:24, 8 May 2013 (UTC)[reply]


Probably the best example, and these days the most important one, given that governments are making decisions based on it, is the science of climate change. Climate change boffins use computer models to predict what temperatures, sea level, and storm activity will be like 50, 100, or more years from now. Almost all expect all three to increase, but the range of predictions does not inspire confidence that this is a well-understood science. All modelling is done by making simplifying assumptions - that is, what are at best judgement calls, and at worst no more than wild guesses, as to what factors to include and what factors to ignore, in order to make the computer modelling simple enough to actually do.
The Australian Government in 2010, recognising that most Australians live and work in coastal areas, directed the Bureau of Meteorology to issue maps showing the flooding expected, with high and low limits, so that town planning, investment, and planning for major remedial works can proceed, with climate change in mind, on some sort of rational basis. After some pondering, the BOM boffins decided on a sea level rise low limit of 800 mm and a high limit of 1100 mm by the year 2100. Quite precise, you might say, but this is only a small fraction of the range of predictions made by various experts around the world. And the assumption of a 300 mm range in sea levels had an enormous effect on the magnitude of city areas computed to be flooded. It is difficult to say whether the BOM had some reasonable basis for their assumptions, or whether they decided they shouldn't frighten us too much.
Most computer models have disregarded variation in the Sun's output, which is a risky thing considering we don't fully understand the Sun's weather. Some boffins just ignored it or didn't think of it, some have reasoned that it probably can be ignored; a few think it cannot. But until we have at least a few thousand years of good data (we do not), it's really just a guess. Heck, there is not even a broadly accepted view as to whether the last sunspot cycle was normal or not.
As far as I know, all climate modelling has proceeded on the basis of natural phenomena plus the normal activities of Man in peacetime. The common result of such modelling is a global temperature rise, which will cause a rise in sea level and an increase in storm activity (because there is more energy in the system). But what if there is a nuclear war, either local or large scale, sometime in this century? It is a possible, even somewhat likely, scenario. It would result in long-term global COOLING due to high-atmosphere dust, much as many (but not all) climate experts think that the saturation bombing of German cities in WW2 caused unusually cold winters in the Northern Hemisphere for a couple of years.
Some expert boffins (only some) in relevant fields consider the ocean current directions in the North Atlantic to be unstable, and a change from one quasi-stable state to the other could produce global climate change far outweighing anything Man is doing. Almost all climate change modellers have ignored this, or simply are not aware of it. Who is right?
I have read journal articles that say volcanoes are a natural source of greenhouse gases and act to increase temperatures. Other articles have said that dust from volcanoes can lower temperatures. Which one will predominate over the next 50 or 100 years? It seems to me the answer can at best be a good guess, as predicting when a given volcano will erupt, and what sort of eruption will occur, is a science not yet mastered.
Wickwack 120.145.46.40 (talk) 02:02, 8 May 2013 (UTC)[reply]
What other type of science, besides Pangaea Ultima and far-future astronomy, involves speculation? People told me earlier that the whole purpose of science is speculation and guesstimates; otherwise there would be no point to science.--69.233.254.115 (talk) 02:34, 8 May 2013 (UTC)[reply]
People gave me an example of cold fusion. Does chemistry involve theory and future prediction? I never hear chemists argue about any future events. I never hear predictions of the future or speculation in the biology area either. What other scientific areas involve prediction debates and speculative arguments?--69.233.254.115 (talk) 02:40, 8 May 2013 (UTC)[reply]
In the sense that scientists seek to understand phenomena, develop working hypotheses and test them against measured data, and use these hypotheses to predict what new experiments will be worth doing (as distinct from Engineers, who make practical decisions using the theories provided by scientists), it is correct to say the purpose of Science is speculation and guesstimation. However, in most fields what gets predicted is known to within very close limits and is for all practical purposes a certainty. For example, I have been a practicing electrical Engineer for over 50 years. The theory has never let me down, though I have long since learnt to watch for my own human error. Wickwack 120.145.46.40 (talk) 02:48, 8 May 2013 (UTC)[reply]
It's true that the purpose of almost all science is to produce explanations or theories from which meaningful predictions of the future can be made; but the range and accuracy of those predictions is highly variable. "What happens if I let go of this apple?" has an obvious and fairly specific answer; "What happens if I let go of these dice?" does too, but it may lack the specific detail the questioner wants; "What happens to the whole world in 50, or 50 million, years' time, if current trends continue?" has no one specific answer, but rather a range of varyingly likely guesstimates. AlexTiefling (talk) 11:22, 8 May 2013 (UTC)[reply]


You have to look at what is based on indisputable physics. If you focus on that, you can even look into the future and get an accurate picture. E.g., we may not know the exact configuration of the continents 300 million years from now, but you have to ask if that is relevant. We actually do know the big picture of what things will look like here on Earth over the next hundreds of millions of years quite well. The most important factor is that the Sun will gradually get hotter. We can quite accurately predict the induced decline in CO2 levels and when this becomes so low that photosynthesis will stop. We can predict quite accurately when the oceans will evaporate away. That will then lead to a runaway greenhouse effect. Count Iblis (talk) 11:54, 8 May 2013 (UTC)[reply]

I think that what you're running into here is chaos theory. There are systems (and the weather is one of them) where the equations for what's going on are well known - but the final outcome can be drastically different depending on microscopic variations in the initial numbers. So in the case of the weather, if your thermometers are off by even a millionth of a degree - then your ability to predict the path of that hurricane sometime in the future can be off by hundreds of miles. This is popularly known as "the butterfly effect" - but it affects much more than the weather.

My favorite example of that is this: Take a free-swinging pendulum with a magnet on the end. Place a sheet of paper beneath it and place two magnets onto the paper, labelled 'A' and 'B'. Let's do it in a perfect vacuum with totally frictionless bearings. Set the pendulum swinging, and it'll wind up pointing towards either A or B. Given the initial position of the pendulum, we can write down the equations for its motion with complete accuracy and confidence. Now, let's experimentally determine which set of places we can launch the pendulum from to have it end up over A - and which set leaves it over B. Let's put a colored dot onto the paper where we launched the pendulum from - a red dot if launching the pendulum from that point leaves it ending up over A, and a blue dot if it ends up over B.

If you were to do that experiment over and over again, you'd discover large areas of the paper that were all red, large areas that were all blue and a lot of areas where the red and blue dots seem to be all over the place. If you investigated those areas by placing the pendulum between those dots and filling in the gaps between them, you'd still find regions with a mixture of red and blue dots. If you plotted the positions perfectly, then the smallest all-blue and all-red areas would be smaller than the size of an atom...infinitely small in fact. So in those regions, mispositioning the pendulum by even the width of an atom is enough to screw up the results.

When science is faced with chaotic systems like that, our ability to predict the past or the future from a given set of data becomes sharply limited by the precision of that data. Even though we know the precise equations governing magnets and pendulums, we sometimes can't predict where the pendulum will end up. That's why we can't predict the weather - or know precisely how the continents drifted, or any of a wide range of things.

That doesn't mean that we can't predict anything at all. Many systems are not chaotic - many others are chaotic but can be controlled such that we're working in non-chaotic regions of the "parameter space". There are places on the red and blue map for the pendulum where we can predict with 100% confidence where the pendulum will end up.
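To make that concrete, here is a minimal numerical sketch of this kind of two-magnet pendulum experiment (not the exact setup described above: the force law, spring constant, damping, and launch points below are invented for illustration, and the basin boundaries depend on them):

```python
import math

MAGNETS = {'A': (-1.0, 0.0), 'B': (1.0, 0.0)}

def settle(x, y, steps=200000, dt=0.001):
    """Integrate the bob's motion and report which magnet it settles nearest."""
    vx = vy = 0.0
    for _ in range(steps):
        ax = -0.2 * x - 0.1 * vx          # spring-like restoring force + damping
        ay = -0.2 * y - 0.1 * vy
        for mx, my in MAGNETS.values():   # attraction toward each magnet
            dx, dy = mx - x, my - y
            r2 = dx * dx + dy * dy + 0.01  # softening term to avoid blow-up
            f = 1.0 / (r2 * math.sqrt(r2))
            ax += f * dx
            ay += f * dy
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return min(MAGNETS, key=lambda k: (MAGNETS[k][0] - x) ** 2 + (MAGNETS[k][1] - y) ** 2)

# Two launch points differing by one part in a billion. Near a basin boundary
# this nudge can flip the outcome; whether this particular point sits near a
# boundary depends on the invented parameters above.
print(settle(0.123456, 1.0), settle(0.123456 + 1e-9, 1.0))
```

Near a basin boundary, no amount of shrinking the nudge makes the outcome stable - that is the fractal structure of the red-and-blue map described above.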

Even in cases where we do have chaotic systems, we can often come up with statistical answers that are still quite useful.

Astrophysics is an especially difficult area. We might measure the spectrum from a distant star, find some distinctive spectral lines, use those to calculate the red-shift, from that calculate the distance to the star, from that calculate its true intensity, from that calculate what kind of star it is, from that and its intensity estimate its mass, from that... well, you get the picture. Every one of those stages contains measurement errors and approximations. Making solid deductions from those results (taking into account the size of the error bars) is rather tricky - and you have to expect there to be more than one set of hypotheses that covers the same set of observations.
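As a toy illustration of how such a chain of inferences degrades (the per-stage error figures below are invented, not real astrophysical uncertainties), independent fractional errors combine roughly in quadrature:

```python
import math

# Hypothetical 1-sigma fractional (relative) errors contributed at each stage.
step_errors = {
    'redshift':  0.01,
    'distance':  0.05,
    'intensity': 0.10,
    'star type': 0.15,
    'mass':      0.20,
}

# Independent fractional errors add roughly in quadrature.
total = math.sqrt(sum(e * e for e in step_errors.values()))
print(f"combined fractional error ~ {total:.2f}")  # ~0.27, i.e. roughly 27%
```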

SteveBaker (talk) 16:58, 8 May 2013 (UTC)[reply]

The OP is misunderstanding how science works. New research papers are, by their very nature, unproven, speculative, and unreliable. On Wikipedia, they're considered primary sources and should generally not be used in articles. If a paper seems to have merit, it is included in review articles and meta-analyses. If the scientific community manages to replicate the paper's results repeatedly, or otherwise find independent evidence of its validity, the discovery might become part of the scientific consensus, and might make it into textbooks or Wikipedia.
Now for astrophysics, which is actually my field. Just like in any science, there are predictions about the future that are absolutely certain, and those that are highly speculative. I'm absolutely sure that I can tell you where Mars will be a year from now, to within a few meters. I'm sure that a transit of Venus will happen in December 2117. Scientists are almost as certain that the Sun will turn into a red giant, shed a planetary nebula, and leave behind a white dwarf. There are 100 billion roughly Sun-like stars in our galaxy; we can see that plenty of other stars of similar mass and chemical composition to the Sun have either gone through this process or are going through it right now. On the speculative side, we don't really know if the Sun will swallow the Earth after turning into a red giant--existing models aren't accurate enough to predict that.
Regarding simulations: in the 21st century, computer simulations are an indispensable tool of almost every science. Humans are simply not smart enough to work out the equations of stellar evolution and predict what a star will do through its lifetime, because that involves more computation than a human could possibly handle. I'm sure Wickwack would agree that they're an indispensable tool in electrical engineering as well. The difference is that engineering is meant to use well-tested, well-understood and uncontroversial scientific principles to make a reliable product, whereas research is meant to discover those principles in the first place. Just like all of science, simulations involve simplified models and don't account for every possibility. For example, we could simulate where Mars will be tomorrow, but Wickwack seems to be worried that an alien could come and whack it off its orbit today. If someone else wants to model the alien-Mars interaction, they're free to do so, but the non-alien model has worked exceptionally well in the past, accounts for all likely scenarios, and is based on known physical laws. The same is true for climate science. A model based on known physical laws that accurately predicts past and near-future temperatures is our best guess for what will happen in the future. All realistic models that meet these criteria predict easily measurable warming, flooding of low-lying areas, and more extreme weather. Climate models probably don't account for the Sun suddenly changing brightness, but the Sun hasn't changed brightness by more than 0.1-0.2% in the past 2000 years, there's no astrophysical reason for it to change brightness, and other main-sequence stars show the same level of stability. They also don't account for a nuclear war, or for the chance that humans suddenly decide to stop emitting CO2 and manually sequester it out of the atmosphere. The best response to that complaint is "my model tells you what happens when there's no nuclear war. Somebody else can tell you what happens during a nuclear war."
In the end, computer models are judged in the same way as other hypotheses. If they predict things correctly, they're accepted. If not, they're improved or rejected. --Bowlhover (talk) 19:09, 8 May 2013 (UTC)[reply]
Some comments:-
  • Bowlhover's description of science is correct but rather idealistic. The world's body of scientists is just as contaminated with incompetents, tricksters, and honest mistake-makers as any other field. Sometimes wrong theories get thrown out right smartly; some persist for years. Where there is a vacuum of knowledge, people fill it with dubious theories, but until the right experiment happens, or the error is discovered, such theories can be accepted. Just look at some of the theories on why the dinosaurs disappeared. Lots of theories not properly confirmed make it into textbooks. In the field of applied science, as distinct from pure research, newly discovered knowledge is often published first in books.
  • I would indeed agree that computer modelling is an indispensable tool in electrical and especially electronic engineering. There is an important difference between electrical/electronic modelling and climate modelling, though: electric and electronic modelling is very much simpler and on VERY solid theoretical ground. The implications of the necessary simplifications and assumptions are VERY well understood. Climate modelling has yet to acquire a useful amount of accurate measured data.
  • I have no idea why Bowlhover thinks I worry about aliens coming and whacking Mars off its orbit. Not only is it clearly quite unlikely; if they do, they'll perturb Earth's orbit as collateral damage, and cause us all death and destruction. So, in the unlikely event of it happening, we'll be so stuffed nothing will have been gained by pondering it. No good worrying about whether your favorite TV programme will continue if you are going to die. More seriously, that's exactly what proper scientific (or engineering) modelling is about - making intelligent decisions about what to leave in and what to leave out. And that's exactly where climate modelling is weak.
  • Nuclear war (and even large-scale conventional war) should be considered by those who make decisions based on climate change science. It is NOT satisfactory to model natural and peacetime conditions separately from war conditions, as one thing all the boffins agree on is that there are positive and negative feedback systems in climate, and significant non-linearities. This means that there is coupling between the two - you simply cannot arithmetically add the effects of war from a war model to the natural and peacetime effects from natural and peacetime models. If governments believe global temperatures will increase, they may enforce things that will put us in economic or actual peril should it happen that temperatures decrease. Strategies like reducing population (as China has been doing) will be good in either case. Strategies such as altering the atmosphere to reduce the greenhouse effect just might make things worse before we realise it. Other strategies, such as setting up infrastructure to assist farming in new areas expected to become viable because of temperature increase (as has been proposed in Australia), and abandoning areas that are expected to become unviable (as the Australian Government seems to be doing by default), will just be an economic penalty should temperatures not rise as expected.
  • The variation in the Sun's output, as measured at the average Earth-Sun distance (but not as attenuated by the Earth's atmosphere), is about 1365.5 to 1366.5 W/m2, i.e., about 0.07% variation (cf. the 0.1 to 0.2% Bowlhover claimed "over 2000 years"). However, accurate measurements are available for only ~30 years. Plenty of reasonable theory has been advanced to suggest that the average output over thousands of years may be significantly less. I will admit that this is controversial. We know that solar output is affected by and linked to sunspot activity, but we simply do not have enough data on sunspot activity to understand it. In the last decade the Sun has confounded the experts who predict sunspot activity. (A rough sense of the scale of such variations is sketched below.)
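For scale, a standard zero-dimensional energy-balance estimate (blackbody Earth, textbook round-number albedo, no greenhouse effect or feedbacks) shows how much the effective temperature moves for shifts of the sizes quoted above:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
ALBEDO = 0.3      # textbook round number for Earth's planetary albedo

def t_eff(solar_constant):
    """Equilibrium effective temperature from S(1 - a)/4 = sigma * T^4."""
    return (solar_constant * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25

base = t_eff(1366.0)                    # about 255 K
for shift in (0.0007, 0.002):           # the 0.07% and 0.2% figures above
    delta = t_eff(1366.0 * (1 + shift)) - base
    print(f"{shift:.2%} shift in solar output -> {delta:+.3f} K")
# Roughly +0.04 K and +0.13 K: small on its own, though feedbacks are ignored here.
```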
Wickwack 120.145.32.250 (talk) 10:03, 9 May 2013 (UTC)[reply]
Responding point by point:
  • Bowlhover's description of science is correct but rather idealistic. - Yes, there are plenty of errors made - but that's the point of not using primary sources. A single experiment with a single result is not generally accepted as "The Truth" until it's been independently reproduced at least once and published in some kind of higher level review.
  • I would indeed agree that computer modelling is an indispensable tool in electrical and especially electronic engineering. There is an important difference between electrical/electronic modelling and climate modelling, though: electric and electronic modelling is very much simpler and on VERY solid theoretical ground. -- The problem for climate modeling is the complexity of the interactions. However, the goal here is not to get a perfect year-by-year or month-by-month temperature reading - it's to gain enough precision to capture the trend and approximate rate. That's all that ought to be needed to spur governments to take drastic action. You don't need to know whether the polar bears will be extinct in 5, 10, 50 or 100 years - you just need to know that they are going extinct. You don't need to know whether 5, 10, or 15 tonnes of CO2 per capita is enough to cause serious problems. Your claim that we don't have enough accurate data is utter nonsense. You first have to define what you expect to get from this data. If you're trying to figure out the precise year at which rising sea levels would cause the Thames flood barrier to over-top, then yes - we need more data. If you just need to know that it's going to be OK until its planned replacement in 2030, then we have all the data we need.
  • I have no idea why Bowlhover thinks I worry about aliens coming and whacking Mars off its orbit. - Whatever.
  • Nuclear war (and even large-scale conventional war) should be considered by those who make decisions based on climate change science. It is NOT satisfactory to model natural and peacetime conditions separately from war conditions... - You can't say that scientific modeling is useless until every single possibility is taken into account. It's perfectly valid (and useful) to produce a simulation that says "this is what we predict will happen assuming there is no nuclear war". That produces a useful result and advances the discussion. Once you have a solid model that is limited in that it excludes this special-case event - then someone else can come along and improve on it later by adding that possibility. You don't have to solve the entire problem in one bite of the cherry.
  • The variation in the Sun's output, as measured at the average Earth-Sun distance (but not as attenuated by the Earth's atmosphere) is about 1365.5 to 1366.5 W/m2 i.e., about 0.07% variation, (cf 0.1 to 0.2 % as Bowlhover claimed "over 2000 years"). However, accurate measurements are available for only ~30 years. - But if you say "we don't have enough data - so we can't calculate anything" - then science will cease. You merely need to be careful to adjust your error bars to account for the data you don't have - and apply suitable caveats to your answers. It's perfectly valid to come out with some prediction - with the caveat "This result assumes that solar activity remains as it has been for the last 30 years". Nothing whatever wrong with that. All of human endeavor is based on prediction with assumptions. You drive to work in the morning on the assumption that a giant meteor will NOT come crashing down on the freeway without getting really solid data about meteor impact frequencies. Sure, that's not a perfect assumption - but it's sufficiently good for the purpose.
Using small approximations and small data gaps to justify inaction is a truly stupid way to proceed. One has to weigh the probabilities against the cost. In the case of climate science, the probability that the climate science is wrong and that global climate change isn't going to happen is very close to zero...but it's not zero. Do we wait to close that 0.1% gap in our knowledge - or do we take action anyway? Do you decide "I won't drive to work today because I don't know the probability of getting smooshed by a meteorite impact to three decimal places." or do you say "Well...I've never seen one of those happening, and nobody I ever met saw one either - so it's probably unlikely enough to not affect the outcome of my decision - so I'll ignore it for the purposes of making this decision."?
SteveBaker (talk) 21:24, 9 May 2013 (UTC)[reply]
The common theme in Steve's responses is that he thinks I claim that attempting to model climate change is not worthwhile, and he hasn't realised that in scientific development you have to crawl before you can walk. In doing that he has read into my post something that was decidedly not there, and he has missed the point with regard to the OP's question - basically, does any field of science use speculations and unreliable guesses to make predictions? The OP possibly used too strong and emotive language, but the answer is essentially yes, and the most significant example in terms of political and economic importance is climate change science.
Nowhere did I say that researching and developing models for climate change is not worthwhile. In fact you can most reasonably, and should, infer from what I wrote that I believe that MORE effort needs to go into this. A lot more.
The difference between electronic engineering computer modelling and climate change computer modelling is that electronic modelling is much simpler and it is mature. Climate change modelling is not. Yet governments are making decisions about it NOW. Some decisions were put into law and implemented a while ago. I didn't say they shouldn't be. I said their decisions are on shaky ground and might turn out to be not good decisions.
Decisions made by leaders, governments, businesses, military generals and professional engineers often have to be made in a timely way, and that often means basing decisions on information that is incomplete, of unknown reliability, and/or imprecise. As a professional Engineer and engineering manager, I've made lots of just such decisions. But what you do when you have imprecise or unreliable information is assess the range of possibilities and choose actions that will work over that range, or at least be a sensible temporary course until better information becomes available. That's not what the Australian Government (my example) is doing - they have instead picked a narrow range and gone for a solution they think will suit that range (assuming it IS a solution to suit that range - but that is another topic). That is the essence of what I said, and it is directly contrary to what Steve said using his driving-to-work example.
In Steve's comment about modelling based on assuming the Sun's output is the recently measured 1366 W/m2 with 0.07% variation at mean Earth distance, for example, he completely missed the point. Yes, it would be stupid to wait for more data - that will take thousands of years. Maybe longer. But it is ALSO stupid to assume that that is the value and won't ever be anything different, seeing as we cannot yet accurately predict sunspot cycles. We need a two-pronged strategy: be flexible while improving the theory until we can accurately predict sunspot cycles. When we can, that will give good assurance that we know what we are doing, and it most probably will not take anything like thousands of years. Steve thinks that we should just go on the measured data. That would be analogous to a gambler noting a run of early wins and continuing to place bets on that basis. He doesn't need to make more observations (though in the case of gambling that might help). He needs to develop a good understanding of the theory of the random or pseudo-random process behind whatever he's betting on. This is not to say that climate science is a gamble to that degree, but it is partly a gamble. — Preceding unsigned comment added by 124.178.140.2 (talk) 02:48, 10 May 2013 (UTC)[reply]
Wickwack 124.178.140.2 (talk) 02:01, 10 May 2013 (UTC)[reply]
I'll focus on solar activity, since SteveBaker has covered the rest pretty well. It's simply not true that we only have 30 years of records on the Sun. Sunspot cycles have been tracked by European observers since the invention of the telescope in the 17th century. We have various proxies for solar activity that stretch back over 10,000 years, mostly from cosmogenic isotopes like 14C and 10Be: [1]. There is no evidence that solar luminosity will decrease enough in the near future to counteract the effects of CO2, no evidence that such a decrease has occurred in the past several thousand years, no evidence that other Sun-like stars have such drastic changes in brightness, and nothing from theory to suggest it is probable. Even the 0.1% variation we talked about earlier is due to the solar cycle; the long-term average is much less variable.
It is entirely true that EE simulations can be done much more accurately than climate simulations, or for that matter almost any simulation in any science. But suppose you need to build a probe that lands on Europa. Testing it on Europa beforehand is impossible, so the best you can do is use simulations with many uncertain parameters to model Europa's atmosphere and the behavior of your design in an alien environment. Some simulations say the probe will survive to 4 km above the surface, some say 8 km, some somewhere in between, but no simulation with realistic parameters predicts the probe can make it to the surface intact. Do you launch the probe anyway and hope that some unmodelled effect will make it land (instead of making it even more likely to crash)? Or do you redesign it, at some financial cost, so that simulations don't predict disaster? That's the situation with climate modelling right now. --Bowlhover (talk) 09:01, 10 May 2013 (UTC)[reply]
Bowlhover, you have missed the point too. Yes, we have sunspot records going back hundreds of years. And we have enough evidence to know that sunspot activity is linked to solar energy output. But what we don't have is sufficient understanding of sunspot activity to make accurate predictions of sunspot cycles. It's much like predicting the weather on Earth, as SteveBaker mentioned. We know enough to know winter will follow summer, but predicting the mid-winter month's mean temperature for next winter is a little difficult. The last sunspot cycle did not follow predictions. Having said that, we understand sunspot variation better than we understand solar energy output. You are correct in saying we have sunspot records going back to the 17th century. But we have accurate records of solar energy output for only the last 30 years. We are not in a position to predict what solar output for the next 100 or 1000 years will be. We can reasonably assume it will be about 1360 W/m2, but we are not in a position to take it as accurately as measurements suggest, and we are not in a position to know what a reasonable range might be. It's a two-part problem: we can't yet accurately predict sunspot cycles, which means we don't fully understand them, and we don't have enough data to fully understand the linkage between sunspots and energy output, and whatever other factors there might be that affect output.
Your analogy of planning a probe to Europa is exactly what I have been saying. I never said, as SteveBaker tried to claim, that climate modelling is not worthwhile. I said it is, but we need to do better. Adjusting your Europa probe analogy a little: it is much as if simulations predict that the spacecraft will need heat shielding to protect it for between 30 and 90 minutes of descent through the atmosphere, every minute of shielding costs a staggering amount of money, and some expert opinion says that their models predict that with too much shielding, shield ablation will contaminate the atmosphere, which we would prefer not to do. It is bad strategy to decide to split the difference and put in 60 minutes of shielding. It is better strategy to work more on the simulations and improve the accuracy. It is better strategy to redesign the spacecraft so that less shielding is needed. Given that time is marching on, it is even better strategy to do both at the same time.
Wickwack 120.145.219.196 (talk) 10:08, 10 May 2013 (UTC)[reply]

Nasal congestion and sleep

Whenever I have a cold I notice that my congestion is completely gone when I wake up in the morning, and that it comes back within a few minutes of waking up. Does this phenomenon have a name / has it been studied in the literature? Could it potentially be exploited in medications? DTLHS (talk) 01:19, 8 May 2013 (UTC)[reply]

It nearly always works the other way round for me. But either way, you can buy decongestant medicines from chemists. Wickwack 120.145.46.40 (talk) 01:25, 8 May 2013 (UTC)[reply]
Sinuses will drain differently depending on your attitude, but you really need to take it up with a doctor. μηδείς (talk) 01:42, 8 May 2013 (UTC)[reply]
My attitude becomes quite surly when my sinuses are blocked. Sometimes a sinus will open up and drain while I am sleeping, other times the blockage becomes worse at night. Edison (talk) 03:15, 8 May 2013 (UTC)[reply]
Nasal cycle might have something to do with it. Or getting up, holding your head in a different position. Ssscienccce (talk) 06:27, 9 May 2013 (UTC)[reply]

Can sun actually magnify electromagnetic waves as depicted in Three Body (science fiction)?

--朝鲜的轮子 (talk) 01:19, 8 May 2013 (UTC)[reply]

There are many scientific inaccuracies and terminology abuses in the Three Body (science fiction) article. The lead paragraph claims that the three-body problem is analytically unsolvable, which is false. The question about magnification does not make sense: waves are not magnified; only images get magnified. In principle, a wave can be amplified when it interacts with a plasma, such as the ionized portion of the heliosphere, but this is uncommon. For the budding plasma physicists, here is Particle Acceleration at the Sun and in the Heliosphere, a good review article from NASA Goddard, the center that operated the Ulysses spacecraft during the 1990s to study solar plasma wave/particle interactions. There has been much research into wave/particle amplification by energetic plasmas, in the context of solar, terrestrial, and other physics.
I would ignore the scientific claims made in this science-fiction story - at least, based on the descriptions in our article. Like many works of fiction, it lets scientific correctness take a back seat to dramatic license. Nimur (talk) 16:10, 8 May 2013 (UTC)[reply]
The three-body thing isn't exactly untrue. Henri Poincaré proved (back in the 1880s) that there is no general analytic solution given by algebra and calculus - except for certain special cases. Solutions have to be arrived at by successive approximation. So it is "analytically" unsolvable - i.e., there is no system of equations into which you can plug the initial positions, masses and initial velocities of three or more bodies and get out their exact positions and velocities at some time in the past or future. But that doesn't stop you from producing an arbitrarily accurate solution given enough time to iterate through the calculations.
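To illustrate what that successive approximation looks like in practice, here is a minimal sketch of a direct numerical integration of three gravitating bodies (the masses, positions, velocities and units are all made up for illustration):

```python
G = 1.0  # gravitational constant in made-up units

def step(bodies, dt):
    """Advance bodies one time step; each body is [mass, x, y, vx, vy]."""
    # Update velocities from the gravitational accelerations at current positions.
    for i, (_, x_i, y_i, _, _) in enumerate(bodies):
        ax = ay = 0.0
        for j, (m_j, x_j, y_j, _, _) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = x_j - x_i, y_j - y_i
            r3 = (dx * dx + dy * dy) ** 1.5
            ax += G * m_j * dx / r3
            ay += G * m_j * dy / r3
        bodies[i][3] += ax * dt
        bodies[i][4] += ay * dt
    # Then update positions from the new velocities.
    for b in bodies:
        b[1] += b[3] * dt
        b[2] += b[4] * dt

bodies = [[1.0,  0.0, 0.0, 0.0,  0.0],   # [mass, x, y, vx, vy]
          [0.1,  1.0, 0.0, 0.0,  1.0],
          [0.1, -1.0, 0.0, 0.0, -1.0]]
for _ in range(10000):
    step(bodies, 0.001)
print([(round(b[1], 3), round(b[2], 3)) for b in bodies])
```

Halving dt while doubling the number of steps gives a more accurate answer - that is the sense in which the solution is only ever approximate but can be made arbitrarily good.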
But it's science fiction - taking liberties with reality in order to make a good story is the core of almost all science fiction. SteveBaker (talk) 16:34, 8 May 2013 (UTC)[reply]
In any case, if there actually were such a thing, it should already have been applied in the search for extraterrestrial intelligence, I guess.--朝鲜的轮子 (talk) 02:15, 9 May 2013 (UTC)[reply]
From the plot description, this book is so full of science errors as to really be not worth discussing here. Even if this "magnification" of electromagnetic waves were something real - the speed of light would still apply to these "magnified" waves - and the message would still take 4.5 years to get there. The synopsis said the aliens came from the nearest star system (at 4.5 light years) - but the nearest is Proxima Centauri, which is at 4.24 light years. While Alpha Centauri (4.36 ly) is a double star, and Proxima Centauri is gravitationally affected by it, this isn't any kind of horribly chaotic triple-star system. The two Alpha Centauri stars orbit each other quite stably - and Proxima is too far away to have much effect on that. The idea that planets could be gravitationally ripped to shreds like this is unlikely! It's also evident that in a system capable of causing such horrible gravitational disruption to planets, no planets could possibly have accreted in the first place! The errors and screwups in this book go on and on - but in the end, it's a work of fiction - so put doubts away and just enjoy it...but don't worry about whether any of the crazy ideas in there are real - they aren't. SteveBaker (talk) 20:55, 9 May 2013 (UTC)[reply]

Bird question

"I am Tawny Frogmouth. Much bigger than a sparrow. So gorgeous they named a pinup girl after me!"

What bird species consume (1) the most insects by weight, and/or (2) the most flying insects by weight (in absolute terms, not relative to their body size)? 24.23.196.85 (talk) 01:55, 8 May 2013 (UTC)[reply]

As an individual bird of that species or as the whole population of that species? Vespine (talk) 02:18, 8 May 2013 (UTC)[reply]
As an individual bird. 24.23.196.85 (talk) 03:34, 8 May 2013 (UTC)[reply]
This is very tough to answer with scientific references. Do you have access to journals through a library, school or work? For now, I'll mention that purple martins are highly praised for their ability to eat/control insects. Farmers across the USA install martin houses (like this [2]) to attract the birds and keep down insect pests (though they don't eat many mosquitoes, despite the common claim). Swallows in general are famous bug-catchers; another candidate would be the barn swallow. Note that swallows are specialized in aerial feeding. I think they are very efficient relative to their weight, but some heavier birds that eat ground-dwelling insects might eat more total weight per day, such as robins or jungle fowl. I'm assuming per-bird; Vespine also has a good point of clarification above. SemanticMantis (talk) 02:30, 8 May 2013 (UTC)[reply]
If we're doing guesses, my candidate would be the Tawny Frogmouth, based purely on the fact that it is quite large (the largest bird in its order) and, unlike the jungle fowl, is almost exclusively insectivorous. I live in tawny frogmouth territory and have been fortunate enough to see and even photograph a couple of them. They are nocturnal and extremely silent fliers, so not usually easy to spot. We had one land right on our balcony once, while we were on it, and the only sound that alerted us to its arrival was when its claws clasped the rail; it gave us all quite a start. Vespine (talk) 03:52, 8 May 2013 (UTC)[reply]
Up to 21 inches long, too. It's weird how that infobox picture makes it look... almost sparrow-sized. Evanh2008 (talk|contribs) 06:11, 8 May 2013 (UTC)[reply]
The picture lower down, under the heading Description, gives a more accurate impression. The camouflage effect is real too. I have some in my garden occasionally. They like sitting in a melaleuca tree, maybe four or five metres above the ground, and staring at us. HiLo48 (talk) 07:42, 8 May 2013 (UTC)[reply]
Gulls are known for being able to eat a large amount of food in one sitting. I don't know any figures, but I suspect that a large gull eating its fill of something like swarming alkali flies (yes, they will do this) would end up with a fair weight packed in its crop. --Kurt Shaped Box (talk) 22:12, 8 May 2013 (UTC)[reply]
Part of the story about the Salt Lake seagulls is that they ate all the locusts they could handle, disgorged them, and then ate more, and so on. ←Baseball Bugs What's up, Doc? carrots→ 23:03, 8 May 2013 (UTC)[reply]
And another part of that story (or so the article says) was that the gulls had some human help, too -- the farmers would form a line and thresh the field in unison, driving the locusts toward the edge of the field where the gulls were gathered. 24.23.196.85 (talk) 04:38, 9 May 2013 (UTC)[reply]

Arthritis

Are Marine people mostly affected by arthritis than normal people? — Preceding unsigned comment added by Titunsam (talkcontribs) 11:06, 8 May 2013 (UTC)[reply]

Do you mean Marines, or people living in marine environments? AlexTiefling (talk) 11:18, 8 May 2013 (UTC)[reply]
And what makes either group not "normal"? -- Jack of Oz [Talk] 12:05, 8 May 2013 (UTC)[reply]
The OP's previous questions are constructed as if by one who does not speak English natively, so we could charitably assume that he meant "average". ←Baseball Bugs What's up, Doc? carrots→ 12:26, 8 May 2013 (UTC)[reply]
I don't see any statistics on that, but I'd guess they would suffer more from osteoarthritis because of undue stress sometimes. Normal exercise is supposed to be okay, or if anything good, but jumping down from walls can be counted as trauma, I'd have thought! Miners and construction workers have an increased chance of rheumatoid arthritis, but that's thought to be more because of stuff they breathe in. Dmcq (talk) 12:38, 8 May 2013 (UTC)[reply]
  • I'd take it as asking, are people who interact with the sea more likely than others to suffer arthritis? 12:39, 8 May 2013 (UTC)
If that is so then I don't know of any reason for a link between living by or on the sea and arthritis, and we're always being told how good fish is for all of us instead of beef or pork. Dmcq (talk) 12:52, 8 May 2013 (UTC)[reply]
on the other hand, it's a cliche that people are always complaining that the dampness is making their rheumatism act up. Gzuckier (talk) 16:59, 8 May 2013 (UTC)[reply]
I suspect that's a different kind of dampness. The kind one gets in rainy environments, which are not necessarily the same as marine or coastal ones. -- Jack of Oz [Talk] 20:58, 8 May 2013 (UTC)[reply]

How would a Sulfur Aluminum battery compare

To standard car batteries, or batteries meant to provide long term propulsion for river-navigating trading vessels? μηδείς (talk) 12:36, 8 May 2013 (UTC)[reply]

Interestingly, Wikipedia doesn't yet have an article on the Aluminum sulfur battery; however, there's more than enough literature to start one: [3]. I haven't looked too deeply into the articles in question, but it looks like most of the literature on the topic came out in 1993; there's a furious burst of articles from that time period, and then almost nothing, indicating that the technology never really made it out of the R&D phase. The only mention of them I can find at Wikipedia is at Aluminium–air battery, which states (uncited) that "Aluminium-sulfur batteries worked on by American researchers with great claims, although it seems that they are still far from mass production. It is unknown as to whether they are rechargeable." So, there you go. There was some research in the area 20 years ago, there hasn't been much since, and so we don't have a lot of performance data to compare. You can comb through the research I noted above in the Google search to see if anything turns up. --Jayron32 12:43, 8 May 2013 (UTC)[reply]
Is the strength of a cell related to the difference in electronegativity of the substances chosen? μηδείς (talk) 21:29, 8 May 2013 (UTC)[reply]
That determines the voltage, but not the cell electrical resistance, tendency to polarise (voltage drop under continuous load), ampere-hour capacity, capacity/weight ratio, and many other aspects of cell performance. Wickwack 120.145.65.205 (talk) 23:27, 8 May 2013 (UTC)[reply]
Can you or other readers summarize each of those concepts in a sentence on one foot? I could probably understand that response clearly at one level deeper of concretization from the abstract. For example, what is "tendency to polarise (voltage drop under continuous load)"? I can imagine that it has to do with the fact that a continuously flowing current realigns something. But I would be bullshitting in the same way I'd pass an exam on Dickens, and just as unsure. μηδείς (talk) 04:38, 9 May 2013 (UTC)[reply]
As I am recovering from consuming a brewer's happy substance, I'm not sure I can answer while on only one foot, but here goes, sitting down as I type:-
  • Voltage: More correctly in the case of a cell or battery, an Electromotive Force (EMF) - EMF is the electric tension between two connections, somewhat analogous to the pressure of water at the output of a pump.
  • Resistance: Any electrical conductor is not a perfect conductor - it resists the flow of electric current. The magnitude of this resistance to flow is called resistance and is measured in ohms. Thicker, shorter conductors have less resistance than long thin ones, somewhat analogous to short fat pipes compared to long thin pipes (but don't take the analogy too far - the flow of fluid in pipes is a non-linear function, whereas electrical resistance is a linear property wrt the dimensions of the conductor).
  • Polarisation: Drawing current from most types of cell causes temporary changes in the cell that increase resistance. For example, in a simple cell made of metal electrodes and a liquid electrolyte, the flow of current causes gas to collect around one or both electrodes, lowering the area of electrode in contact with the electrolyte. This increases the resistance of the cell and lowers the voltage, but if you switch off the current, the gas will disperse, and on reconnection the cell will again supply full output. The EverReady brand used to make this disadvantage of their torch cells and radio batteries into a virtue with their marketing slogan "Nine Lives - bounces back for extra use."
  • Ampere-hour capacity: This is the concept that a cell of a given size has a finite capacity to supply charge, such that current multiplied by time is roughly constant. It is a yardstick to compare cells by, but in practice it is far from constant (see the sketch below).
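As a rough illustrative sketch of that last point, Peukert's law (an empirical rule of thumb for lead-acid cells; the numbers below are invented) shows how the delivered ampere-hours shrink as the discharge current rises:

```python
RATED_AH = 100.0   # capacity at the 20-hour discharge rate (invented)
RATED_H = 20.0
K = 1.2            # Peukert exponent, typically ~1.1-1.3 for lead-acid cells

def runtime_hours(current_amps):
    """Hours of service at a constant current, per Peukert's empirical law."""
    return RATED_H * (RATED_AH / (current_amps * RATED_H)) ** K

for amps in (5.0, 10.0, 25.0):
    t = runtime_hours(amps)
    print(f"{amps:5.1f} A -> {t:5.1f} h -> {amps * t:5.1f} Ah delivered")
# At 5 A the cell delivers its rated 100 Ah; at 25 A, only about 72 Ah.
```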
Wickwack 120.145.32.250 (talk) 09:12, 9 May 2013 (UTC)[reply]
Also to clarify something else for Medeis (and others), electronegativity is merely a way to quantify bond polarization in a molecule, it has limited applicability outside of that very narrow usage. The underlying physics behind electronegativity is related, but only in a very broad sense, to the physics behind electrochemistry. The actual measurement relevant to the discussion here (and perhaps Medeis confused this term with electronegativity) is the standard reduction potential, which is kind of like electronegativity in the sense that both measure the "desire" of a particular substance to "attract" electrons towards itself, but the context of the two measurements is very different: electronegativity measures chemical bond polarization, while reduction potential measures the ability to generate a voltage in an electric circuit, and there's really no actual connection between the two concepts. --Jayron32 16:34, 9 May 2013 (UTC)[reply]
That's helpful. I did mean standard reduction potential, I just haven't studied this as chemistry chemistry since the 80's. How would that concept apply in relation to Al/S as opposed to other potential species? μηδείς (talk) 21:52, 9 May 2013 (UTC)[reply]
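A ballpark sketch of how standard reduction potentials apply to an aluminium-sulfur cell (textbook half-reaction values, rounded; a real cell's voltage also depends on electrolyte, temperature, and the polarisation effects discussed above):

```python
# Textbook standard reduction potentials, volts vs. the standard hydrogen
# electrode (values rounded):
E_STANDARD = {
    'Al3+/Al': -1.66,   # Al3+ + 3e- -> Al
    'S/S2-':   -0.48,   # S   + 2e- -> S2-
}

# Aluminium is oxidised (anode); sulfur is reduced (cathode).
emf = E_STANDARD['S/S2-'] - E_STANDARD['Al3+/Al']
print(f"ideal Al-S cell EMF ~ {emf:.2f} V")  # ~1.18 V, vs ~2.1 V per lead-acid cell
```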

drafting - mechanical drawings

If an angular dimension is called out on a drawing (i.e. 30 degrees, hold) and no tolerance is given, is there any standard used? — Preceding unsigned comment added by 12.1.83.2 (talk) 13:54, 8 May 2013 (UTC)[reply]

As with any dimension on an engineering drawing, if no tolerance is given anywhere on the drawing, normal "good engineering practice" for the lowest-cost suitable manufacturing technique applies. You can (mostly) expect that the tradesman will mark out 30 degrees on a small object with a standard 60-30 set square. With ordinary care that will produce marking out to within +/- 0.5 degree or so. But what happens next? Is it to be cut with a hacksaw? A hand-held oxy torch? A profile cutter? Or is the drawing in electronic form, to be fed into an automatic machining centre? These factors influence enormously what accuracy you get. The tooling used will depend on what finish is specified. Something cut with a milling machine can be expected to provide both an excellent finish and excellent accuracy. But say you've specified some sort of plate to be cut from 18 mm steel plate by means of a hand-held oxy torch, and the side that is at 30 degrees is about 50 mm long. The resulting surface roughness will be about 1 mm, so worrying about cutting it to an accuracy any better than 2 degrees is pointless, and the tradesman won't worry too much about it.
Is the drawing for construction work? Mechanical parts? Construction work has standard tolerances, but I have almost no understanding of them.
Are you certain the drawing has no tolerancing information? Default tolerances and finishes are typically set out either as text or as symbols in the title block for mechanical part drawings.
Wickwack 124.182.22.141 (talk) 14:33, 8 May 2013 (UTC)[reply]
If the machining method is specified - then the tolerances may remain unspecified since they are "whatever that machine produces". SteveBaker (talk) 16:22, 8 May 2013 (UTC)[reply]
It's not clear whether Steve meant "there is no need to specify tolerances if the machining method is specified" or whether he meant "if you don't specify, you'll get whatever is produced". Neither is really true. Each machining process has its normal "good practice" tolerance, plus what it will do with a sloppy worker, and what it will do with special care. Where tolerances are not specified, it's the "good practice" tolerance that you get. For example, if the machined surface is a cylinder turned in an ordinary centre lathe, a tolerance of 0.02 mm is normally achieved by taking normal care, checking with a vernier caliper, and not making any special effort. You can achieve 0.002 mm if you take special care, check with a well-maintained micrometer, etc. If tolerance is not specified, but turning is specified or implied, then a tolerance of 0.02 mm is what you'll get. So, specifying the machining method does in effect specify the tolerance. Note though that in the production of one-off lab equipment, hand-made prototypes, and one-off orders, the tolerances of hand marking-out add to the machining tolerances. For example, drilling a hole: marking out the centre by hand will rarely be better than 0.2 mm; centre-popping will add another 0.2 mm or so of error, and drilling with a worn twist drill will add a bit more. A good tradesman can use techniques to dramatically reduce such errors, but he'll only do it if you ask. Note also that specifying a dimension as an integer (as in 30°) specifies normal good-practice tolerance, but specifying it with decimals, as in 30.00°, tells the machinist that that is the precision you expect - however, doing it this way is not good practice. If precision is required, write it with explicit limits, as in 30.00° +0.01/-0.01.
How to specify tolerances, the symbols used, and how tradesmen should interpret drawings are specified in detail in ISO standards and in harmonised equivalents in other countries, e.g. AS 1100 in Australia. Wickwack 120.145.65.205 (talk) 23:18, 8 May 2013 (UTC)[reply]
The influence of the skill of the machinist isn't always a factor though - if you're using a CNC machine, a laser cutter, a 3D printer or any kind of computer-controlled machining system - then the precision you get is entirely independent of the operator. So if you're preparing CAD drawings for laser cutting (which I do all the time) - then writing down the precision is entirely pointless if you've already determined that the part will be laser-cut.
Worse still, the concept of "precision" for an angle is likely to be entirely meaningless for a computer-controlled machine tool, because they (mostly) use X,Y,Z drives and the angle of a long line will be as accurate as the mechanical construction of the frame of the machine. Although there will always be a positional error of some amount due to backlash in the transmission and the resolution of the X,Y,Z positioning system, that's not going to affect the angular precision much, as the rough numbers sketched below illustrate.
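A quick sanity check of that point (the positional-error figure below is invented): if each endpoint of a cut edge can be off by the machine's positional error, the worst-case angular error shrinks as the edge gets longer:

```python
import math

def worst_angle_error_deg(positional_error_mm, edge_length_mm):
    """Worst case: both endpoints off by the positional error, in opposite directions."""
    return math.degrees(math.atan2(2 * positional_error_mm, edge_length_mm))

for length in (10, 50, 200):
    print(f"{length:4d} mm edge -> {worst_angle_error_deg(0.05, length):.3f} deg")
# 10 mm edge -> ~0.57 deg; 200 mm edge -> ~0.03 deg: longer features, truer angles.
```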
These old-school technical-drawing standards are really kinda meaningless for these kinds of machine tools. For a 3D printer, the parameters of nozzle size and layer thickness are the things you need to specify, not angular precision. For a CNC tool, the tool selection parameters are more critical, and for a laser cutter, you'll get whatever is the best precision the machine produces, no matter what you ask for because there is no cost or speed benefit for demanding less.
In a modern design environment, the designer doesn't so much demand tolerances from the machine shop, but rather knows the parameters of the manufacturing system (s)he's choosing to use, and makes the design work within those limitations.
Furthermore, since the CAD drawing will almost always end up being the thing that actually controls the tool directly - the idea that other people are "interpreting" your drawing is fast becoming archaic too.
SteveBaker (talk) 13:34, 9 May 2013 (UTC)[reply]