
Wikipedia:Reference desk/Archives/Science/2011 August 9

From Wikipedia, the free encyclopedia
Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


August 9


Sunken Nuclear Submarines in the Pacific Ocean and the Indian Ocean


There are sunken nuclear submarines in the Atlantic Ocean. But are there any sunken nuclear submarines in the Pacific Ocean or the Indian Ocean? Would the nuclear warheads in these sunken submarines corrode and explode in the deep water? 99.245.76.40 (talk) 01:41, 9 August 2011 (UTC)[reply]

Absolutely not. The detonation of these devices is incredibly complicated and precise. That is one of the reasons why a nuclear warhead is so hard to make in the first place. For example, consider the very early, very inefficient plutonium device Fat Man that was dropped over Japan. Basically, it had a hollow sphere of explosives surrounding a sphere of nuclear material which was just under critical mass. When the explosives went off, the pressures had to be almost perfectly uniform from every direction in order to compress the sphere of nuclear material so that it reached critical mass. Modern devices are much more advanced; for example, they need to be "armed" before they will work. The worst that could happen in your scenario is that you'd get radioactive water, which we already have courtesy of our many nuclear industries. Fly by Night (talk) 01:53, 9 August 2011 (UTC)[reply]
The water is already radioactive. According to Uranium#Resources and reserves, the oceans are thought to contain about 4.6 billion tons of uranium, of which about 0.7% (30 billion kg) is U-235. That's many orders of magnitude more than the amount of weapons-grade uranium and plutonium that's ever been produced, let alone the fraction of that in sunken nuclear subs. -- BenRG (talk) 04:08, 9 August 2011 (UTC)[reply]
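The arithmetic behind that estimate can be checked directly; here is a quick sketch in Python, using only the figures quoted in the post above (not independently verified):

```python
# Check: 0.7% of 4.6 billion tons of dissolved uranium is about 30 billion kg.
TONNE_KG = 1000                 # one metric ton in kilograms

ocean_uranium_tons = 4.6e9      # dissolved uranium in the oceans (figure from the post)
u235_fraction = 0.007           # natural U-235 abundance, ~0.7%

u235_kg = ocean_uranium_tons * TONNE_KG * u235_fraction
print(f"U-235 in the oceans: {u235_kg:.1e} kg")  # ~3.2e10 kg, i.e. ~30 billion kg
```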
Just got finished watching "Life after People", in which it was stated that water would eventually penetrate the firing mechanism of the sunken missiles of a sunken submarine and trigger the explosion, because the chemicals in the fuse would react on contact with water. The missiles in this case would not have to be "armed", as that refers to the firing mechanism and not to the explosive charge itself. The water damage would therefore bypass the safety precautions. Their explanation seemed plausible; they seem to have done their research. Dominus Vobisdu (talk) 06:18, 9 August 2011 (UTC)[reply]
See Nuclear weapon design#Warhead design safety. Most nuclear weapons are implosion-type rather than gun-type, and most implosion weapons have something in the centre that must be removed before the weapon can detonate correctly. Also, all the plastic explosives must detonate in the correct sequence (which may be all at the same time). If either of these steps does not happen correctly, then all that will happen is that you have a dirty bomb, spreading the unchanged U-235/Pu-239 around the area. This is a problem from a contamination point of view – both uranium and plutonium are poisonous heavy metals, and Pu-239 has a medium-length half-life (24,200 years), while U-235's half-life is long enough to be ignored. CS Miller (talk) 08:58, 9 August 2011 (UTC)[reply]
The design and operation of the weapons assumes that voids in the nuclear core are filled with air; the devices would not be watertight at deep sea pressures, so any leakage of water into the core would compromise the nuclear yield, which works only if all is nearly perfect. Additionally, if the weapon is deeply submerged, the explosive lenses may be compressed and have an altered geometry, again resulting in no yield or a low-yield fizzle, assuming the safety mechanism was somehow bypassed. Acroterion (talk) 15:34, 9 August 2011 (UTC)[reply]
Exactly. One of the major engineering problems that had to be overcome was making sure the warhead stayed intact long enough for enough material to fission. Eventually, the device will blow itself apart, the mass will become subcritical, and the chain reaction will stop. Fly by Night (talk) 21:04, 9 August 2011 (UTC)[reply]
It is pretty implausible that submersion could ever result in a true nuclear detonation. The only water-based hazard I even know of regarding weapons design has to do with the Little Boy weapon, which because of its very high use of fissile material ran the risk of becoming a tiny reactor if you added water into its core. But that was a very specific design flaw, and even that would not have led to a true nuclear detonation (just a burst of radioactivity, and maybe a tiny explosion). --Mr.98 (talk) 18:48, 9 August 2011 (UTC)[reply]
There are no known nuclear submarines sunk in any ocean but the Atlantic. See List of sunken nuclear submarines. That doesn't mean they don't potentially exist — it just says what is publicly known.
However, if you are concerned less with the power source, and more with the weapons themselves, the Soviet sub K-129 did indeed sink in the Pacific, with a full nuclear weapons payload. (The submarine was not itself nuclear-powered – it ran on diesel – and was thus not a "nuclear submarine".) --Mr.98 (talk) 18:50, 9 August 2011 (UTC)[reply]
That and I assume the ships sunk at the Bikini Atoll test site in the Pacific are still radioactive. Googlemeister (talk) 18:59, 9 August 2011 (UTC)[reply]
Probably only weakly so at this point. The really hot isotopes would have already expended themselves by now, all these many decades later. But they are neither nuclear-powered submarines, nor nuclear-armed submarines, so they strike me as kind of irrelevant. (There were 8 submarines involved as part of Operation Crossroads, but none were sunk there.) --Mr.98 (talk) 19:14, 9 August 2011 (UTC)[reply]

One point to make is that not all nuclear submarines carry nuclear warheads. The prefix nuclear refers to the propulsion system: a very small nuclear reactor. The British Astute class submarines are nuclear powered but only carry Spearfish torpedoes and Tomahawk cruise missiles. Fly by Night (talk) 21:15, 9 August 2011 (UTC)[reply]

Explosive liquefaction


Some time ago, I watched a crime drama – like CSI, maybe it was CSI, it doesn't matter. In the episode there was a CGI simulation of death by explosive liquefaction. Does this have a real-life inspiration? The shockwave liquefies the flesh and strips the body down to the bones, before they are pulverised also. Essentially, like putting a hairdryer to a bowl of jelly/jello (depending on your vernacular). All this occurs without thermal damage until after the body has been destroyed in this manner, so it is the shockwave alone doing this damage. Plasmic Physics (talk) 03:17, 9 August 2011 (UTC)[reply]

Underwater shockwaves can be damaging; Mythbusters has dedicated parts of at least three episodes to studying them: see MythBusters_(2010_season)#Episode_139_.E2.80.93_Dive_To_Survive, MythBusters_(2011_season)#Episode_170_.E2.80.93_Paper_Armor, and MythBusters_(2008_season)#Black_Powder_Shark. Each of these looks at a different scenario: the first I cited is an air explosion while you dive under water; the second is an underwater explosion while you are floating on the surface; the third is an underwater explosion while you are also underwater. This last one is the important one for our purposes: basically, if you and the bomb are underwater at the same time, you are dead. However, such shockwaves can't literally turn a human body into "goo" as you describe. It is far more resilient than that. See also this page, which explains the science behind underwater explosions pretty well. --Jayron32 03:59, 9 August 2011 (UTC)[reply]
I don't understand half of it, heh, but those ^ are interesting reads. Apparently high enough vibration frequencies can kill cats and mice and cause thermal damage by energy absorption. So basically sound waves do cause thermal damage, especially at boundaries between regions of different densities (e.g. bone/muscle). I would expect vibrations high enough to cause disintegration to also cause significant heating in the process. Then there's also cavitation and mechanical shearing, used in disinfecting water by causing implosions in bacterial/viral cells, liquefying fat in ultrasonic liposuction, and histotripsy.
Anyway, with distribution of the force and the density of the human body, I would expect actual explosion shockwaves to cause impact damage/blunt object trauma rather than liquefaction. You'll go splat, but you'll also go splat anyway if you were hit by a train.-- Obsidin Soul 05:08, 9 August 2011 (UTC)[reply]

Ok, how does the type of damage vary with distance from the epicentre, from losing your toupee to being completely atomised? Plasmic Physics (talk) 07:29, 9 August 2011 (UTC)[reply]

nfi :D See Blast injury.-- Obsidin Soul 07:43, 9 August 2011 (UTC)[reply]
Shock wave injury is also mentioned in Effects of nuclear explosions-- Obsidin Soul 07:55, 9 August 2011 (UTC)[reply]

Alternatively, we can do it Mythbusters style, and find out what it will take to give the same results? Plasmic Physics (talk) 08:07, 9 August 2011 (UTC)[reply]

You volunteering to be a test subject? :P -- Obsidin Soul 08:29, 9 August 2011 (UTC)[reply]

Unfortunately, no, strictly theoretically. Plasmic Physics (talk) 09:21, 9 August 2011 (UTC)[reply]

snake bites


During a recent camping trip in Northern Ontario, my buddy and I got into a wee debate about snake bites and the survivability of such a bite based on the location of the bite. His position: there is a significantly higher chance of surviving a venomous snake bite (e.g. rattler) if one is bitten on the lower leg or foot as compared to the hand/arm. His reasoning: it is further from the heart, so one has a significantly higher chance of surviving. My position: it does not matter, as once the venom is in one's bloodstream, it travels fast due, in part, to an increased heart rate. All of this is assuming, of course, the immediate administration of first aid and/or medical treatment. What are people's thoughts on this silly debate? 99.250.117.26 (talk) 05:16, 9 August 2011 (UTC)[reply]

From a technical point of view: yes, it does matter where one is bitten, although it also depends on the type of poison. E.g. is it a neurotoxin or a hemotoxin? Which parts of the body does the toxin need to reach to cause serious problems? How does it get to these places? The difference may be seconds or minutes depending on all of these factors. Fly by Night (talk) 05:28, 9 August 2011 (UTC)[reply]
When a snake bites, it's unlikely that the fangs will penetrate a large blood vessel so that the venom directly enters the bloodstream. It's much more likely that the venom will be injected into soft tissue, from which it more slowly diffuses into the bloodstream. Your friend is therefore correct when he says that a snakebite on the foot is going to kill you slower than one on the hand. As Fly by Night pointed out, it also makes a difference what kind of venom is involved. Dominus Vobisdu (talk) 06:09, 9 August 2011 (UTC)[reply]
A few points:
1) The venom may be diluted to where it is no longer at a fatal concentration, when it finally makes it to the critical organs.
2) The venom may break down to where the level is no longer fatal.
3) Medical assistance, such as the administration of anti-venom may be available if the rate at which the venom spreads can be slowed.
So, for these reasons, the further from the vital organs the venom is delivered, the more likely you are to survive, especially if a tourniquet is used to slow the spread. Of course, as mentioned above, this is for a neurotoxin. For other types of venom – say, ones that aren't life-threatening but could damage the area bitten – you might want to do just the opposite, and quickly get the venom into the bloodstream so it will be diluted to a concentration that's no longer harmful. StuRat (talk) 17:48, 9 August 2011 (UTC)[reply]
Tourniquets are seldom helpful and can be very dangerous, since they cut off blood flow to the limb. If they are left on long enough, tissue in that limb will die and the limb may have to be amputated. Even when a tourniquet is appropriate, such as after an amputating injury, it is recommended that it be released every fifteen minutes or so. Sjö (talk) 10:16, 10 August 2011 (UTC)[reply]
For some useful technical info on snakebites, go to about the 2:30 point of this video, and stick with it until the end, which is at about 3:30. [1] ←Baseball Bugs What's up, Doc? carrots→ 18:00, 9 August 2011 (UTC)[reply]

I'm not sure that the vid quite settled the debate between me and my friend, but I do have to admit it was "insightful." I concede to my friend's superior knowledge of snakebiting! 99.250.117.26 (talk) 01:42, 10 August 2011 (UTC)[reply]

Talk.Origins


Why does Talk.origins often show links to pages of websites that refute their arguments in their pages with their arguments? — Preceding unsigned comment added by 110.174.63.234 (talk) 07:58, 9 August 2011 (UTC)[reply]

What exactly do you mean by "Talk.origins"? Please post a link.
Origins redirects to Origin which is a disambiguation page which lists other pages that relate in some way to the concept "origin". Talk:Origin is simply the discussion page for Origin. Roger (talk) 09:54, 9 August 2011 (UTC)[reply]
I assume that the OP means TalkOrigins Archive or maybe talk.origins, but since the latter is a newsgroup it doesn't have "pages". The OP would be better off asking the person who wrote the pages in question; their email address is sometimes posted, and if it isn't, they're not very hard to find. If I may speculate, there can be several reasons: to provide background for the article, to provide further reading (similar to our External links), or to prove that there really are people who make some ludicrous claim. Sjö (talk) 10:55, 9 August 2011 (UTC)[reply]
The TalkOrigins Archive often includes links to creationist sites offering counter-arguments. Of course these counter-arguments regularly fail to refute the scientific position, but they may give a different impression to cursory readers. TalkOrigins does this as a matter of principle. They don't want to overwhelm, they want to inform and convince. Readers should be aware of all positions, and come to their own conclusion. It's not too different from our WP:NPOV – they won't try to represent absurd and unscientific positions themselves, but leave that to the peddlers of superstition. And they make access to those positions available without endorsing them. --Stephan Schulz (talk) 11:35, 9 August 2011 (UTC)[reply]

What is this specification standard?


Kindly help... My boss has asked me to inspect a lot of chairs. The specification standard contains "the backrest supporting beam width: 460mm(L) X 80 X 40 14BG HR TUBES". I get that this is the specification of the backrest supporting beam. But what is 14BG HR? It can't be a thickness if 80mm and 40mm are the inner and outer diameters.

thanx 122.169.149.238 (talk) —Preceding undated comment added 12:46, 9 August 2011 (UTC).[reply]

Wouldn't it be easier and probably faster to just ask your boss? Googlemeister (talk) 13:00, 9 August 2011 (UTC)[reply]
14 British Gauge (Imperial) hot rolled --Digrpat (talk) 14:14, 9 August 2011 (UTC)[reply]

Battery in digital camera


Why do digital cameras have batteries? Couldn't they just use the incoming light to save a picture? (provided you don't want a flash, LCD screen, GPS or whatever on it). Quest09 (talk) 13:38, 9 August 2011 (UTC)[reply]

It's true that light contains energy, and fundamentally, a digital camera sensor represents a photoelectric process that converts photons to electric potential energy. But the amount of energy in the light is very tiny, and the quantum efficiency (roughly speaking, the charge produced per photon) is also very small. Today's sensor technology is constantly improving, but even on the best sensors, it requires input energy to drive specific parts of the sensor: an array of low-noise amplifiers, analog-to-digital converters, and signal level shifters. (Each incident photon only provides enough energy to light up one single diode, not all the support circuitry – but if we measured this directly, we would have a very noisy picture.) Then, to transfer image data to storage, a bus is used, which requires power. The image may be preprocessed by hardware and software, which requires a DSP or CPU, and finally, the image must be committed (saved) to nonvolatile storage. Each of these stages represents energy consumption, and with today's technology, that consumption far outpaces the energy per photon.
Even with hypothetical perfect future technology, every photon you use to drive a power circuit is one photon you aren't imaging, which is a fundamental limitation of your imager's quality. An equivalent-size and process imager will have better sensitivity, lower noise, and so on, by receiving power from an external power source. This is informally the "gain/power/bandwidth" product of the transducer/amplifier. It happens that there is a HUGE volume of research on the theory and practice of squeezing more out of each photon in a sensor, using every possible material-science and analog electronics trick known to science - but so far, no sensor has satisfactory noise performance or output levels unless powered externally. Nimur (talk) 14:15, 9 August 2011 (UTC)[reply]
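The point that the energy in the captured light is dwarfed by the camera's own consumption can be made concrete with a back-of-envelope sketch. The photon count, power draw, and exposure time below are illustrative assumptions, not measurements from any particular camera:

```python
# Back-of-envelope: energy in the captured light vs. energy a camera uses.
h = 6.626e-34   # Planck constant, J*s
c = 3.0e8       # speed of light, m/s

wavelength = 550e-9                  # green light, m
e_photon = h * c / wavelength        # ~3.6e-19 J per photon

# Illustrative assumptions (not from the thread):
photons_collected = 1e10             # photons hitting the sensor in one exposure
camera_power = 1.0                   # W drawn while capturing
exposure = 0.01                      # s

light_energy = photons_collected * e_photon
camera_energy = camera_power * exposure
print(f"light energy:  {light_energy:.1e} J")   # ~3.6e-9 J
print(f"camera energy: {camera_energy:.1e} J")  # ~1e-2 J, millions of times more
```

Under these assumptions the camera spends roughly a million times more energy taking the picture than arrives as light, which is why the answer to the original question is no.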
The quantum efficiency of CCD-based digital photography is actually very good (although the amount of light, as you say, is comparatively small). Grandiose (me, talk, contribs) 18:38, 9 August 2011 (UTC)[reply]
I think there is a simpler answer. Every digital camera contains a small computer, which needs power to run. The computer is needed to process the image data and transfer it to storage. Looie496 (talk) 14:44, 9 August 2011 (UTC)[reply]
While that's part of the answer, it's certainly incomplete. Nimur's response may be more 'complicated', but it's also much more accurate. Even if there were no image processing and no energy cost to transfer the image to storage, simply readying the image sensor (whether CCD- or CMOS-based) to collect an image eats up a fair bit of power. The sensor's built-in amplifiers – which are required to produce an image in a reasonable amount of time – eat up more power. Converting the raw analog signal (essentially the amount of charge stored on each pixel) into a useful digital signal takes analog-to-digital converters, which draw still more power. These steps all happen before you get to the computer.
I suspect that the OP has been exposed to a rather oversimplified description of how the sensor in a digital camera works. Conceptually, it's easy to imagine a pixel as being a little tiny solar cell, with the camera somehow measuring the output current from each one in order to build a picture. While appealing, that sort of description glosses over some very important technical details about how the sensor actually works, and in particular how one can (or cannot) extract energy from it. TenOfAllTrades(talk) 15:07, 9 August 2011 (UTC)[reply]
And to be fair, since the question is really about physics, there's no reason to assume that a digital camera needs to be computerized. It is entirely conceivable that a digital camera might operate without any software or signal processing; a simple digital circuit could directly read out pixel levels and burn them to nonvolatile memory. Trying to build such a goofy contraption using parts that you can buy today might prove impossible, but there isn't any reason it can't be done. If that were attempted, the physicist or engineer building that digital camera would have to start counting photons: for each photon you image, you need at least one more photon to provide the energy to commit a bit to nonvolatile storage – an immediate 50% loss of sensor sensitivity. Anyone with a profound interest, academic or professional, in digital cameras may want to start by reading about CMOS sensors, and a bit of information-theory thermodynamics. If you're interested in that subject, I can dig up a few very nice technical resources for you; here's a simplified presentation-slide-set, Thermodynamics of Analog-to-Digital Conversion, from the Stanford VLSI laboratory. Think of it this way: capturing an image is a reversible process; it requires one photon to produce one electron (with a prescribed energy). Saving an image is an irreversible process (in the technical thermodynamic meaning of that phrase). Whether a computer is used or not, saving a photo will require more energy than capturing it. These energy processes are fundamentals of physics, not engineering limitations. Below, some further discussion is presented regarding engineering limitations (the number of milliwatts commonly consumed by a sensor, and so forth), which I am not inclined to comment on, because this will vary by orders of magnitude from year to year and sensor to sensor. Nimur (talk) 16:59, 9 August 2011 (UTC)[reply]
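As a rough numerical illustration of the thermodynamic argument (my numbers, not from the thread): the Landauer limit, k·T·ln 2, gives the minimum energy needed to erase one bit at temperature T, while E = hc/λ gives the energy of one photon. A single visible photon carries well over that per-bit minimum – though real storage hardware is many orders of magnitude less efficient than the limit:

```python
import math

k_B = 1.381e-23   # Boltzmann constant, J/K
h = 6.626e-34     # Planck constant, J*s
c = 3.0e8         # speed of light, m/s

T = 300                               # room temperature, K
landauer = k_B * T * math.log(2)      # minimum energy to erase one bit, ~2.9e-21 J
e_photon = h * c / 550e-9             # one green photon, ~3.6e-19 J

print(f"Landauer limit: {landauer:.2e} J/bit")
print(f"photon energy:  {e_photon:.2e} J")
print(f"ratio: {e_photon / landauer:.0f}")  # one photon carries over 100x the minimum
```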
The answer is Yes. Digital cameras could be fitted with Solar cells to provide the power they need similarly to a Solar powered calculator. But this would be expensive, inconvenient (large cells) and useless in poor light. Cuddlyable3 (talk) 15:18, 9 August 2011 (UTC)[reply]
Your answer is "yes" to a question that the OP didn't ask. The question was about whether or not a digital camera could use incoming light (that is, through its lens) to operate the camera; the answer to that question is "no", for the reasons thoroughly covered in the preceding answers. As for suggesting that the exterior of the camera body could be covered in solar panels in order to generate power to operate the camera—I'm not sure that you've really thought that one through. Pocket calculators draw tens to hundreds of microwatts of power; digital cameras draw tens to hundreds of milliwatts. When not in use, cameras tend to be stored in cases (or pockets, for compact cameras); when in use, exterior panels would be obstructed by the hands of the photographer. Far more likely to be of use are products like this one, which incorporates a larger panel into the top of a camera bag in order to trickle charge a battery. TenOfAllTrades(talk) 16:02, 9 August 2011 (UTC)[reply]
An imaging lens, rangefinder lens, exposure meter and dial markings (to be legible) can all be parts of a camera that use incoming light, not just the imaging lens. The question "Couldn't..." sets no limits on how the camera apportions incoming light nor how it is designed and used. The electronic design challenge of matching a continuous source of low power to occasional peak power drains is manageable. It might involve power reduction by much slower readout from a Charge-coupled device to the Flash memory or even a novel integration of these two elements. Cuddlyable3 (talk) 22:13, 9 August 2011 (UTC)[reply]
Even my vintage 1970s SLR film camera had a battery, to operate the light meter and help adjust the f-stop. And cameras from that vintage with autofocus also require batteries. The only thing I've got with a tiny solar cell is a calculator. ←Baseball Bugs What's up, Doc? carrots17:37, 9 August 2011 (UTC)[reply]
More practical for a camera than solar would probably be a hand-crank generator to charge a battery. I have a flashlight with one of those, and the advantage is that you wouldn't need to have light to charge the battery. Googlemeister (talk) 18:38, 9 August 2011 (UTC)[reply]
The disadvantage would be carpal tunnel syndrome. ←Baseball Bugs What's up, Doc? carrots19:00, 9 August 2011 (UTC)[reply]

Astigmatism

[Image: Text blurred by different focal positions of an astigmatic lens]

I get how a strictly nearsighted person would see without his glasses, because I am nearsighted, and I get how a strictly farsighted person would see, because I assume it is like if I held a book way up to my eyes – except for them it's still blurry at what would be a comfortable reading distance for a normal-vision person. However, I don't understand how someone with astigmatism sees. I read your article but I don't think I understood it. Thanks for any help. --joey — Preceding unsigned comment added by 50.19.15.136 (talk) 15:37, 9 August 2011 (UTC)[reply]

Did you look at the astigmatism (eye) article, or just the astigmatism article? Anyway, astigmatism really just means a distortion of the lens, and can show up in a wide variety of ways, although some are much more common than others. I am appending an illustration from our article. Looie496 (talk) 15:50, 9 August 2011 (UTC)[reply]
Speaking as an astigmatic, one effect I get is this: if I look at two identical striped patterns, one vertical and one horizontal, without corrective lenses, then the vertical pattern looks more in focus than the horizontal. If I rotate the patterns (or myself) by 90 degrees then the difference is reversed. AndrewWTaylor (talk) 16:30, 9 August 2011 (UTC)[reply]

Interesting that you can make the fuzzy images focus by squinting. μηδείς (talk) 18:28, 9 August 2011 (UTC)[reply]

What you are actually doing is cutting down on the amount of light entering the eye. Then, the brightness of the white is reduced. With a reduction in the contrast between the white and black, the gray area becomes less noticeable when the eye/brain re-adjust your perception to have the white be completely white and the black be completely black. You get the same effect by simply turning down the brightness and increasing the contrast on the image in any old photo editor. (Note: This is why it is much easier to focus a movie projector with a dim bulb than it is to focus a movie projector with a bright bulb - just in case any future projectionists are interested.) -- kainaw 18:58, 9 August 2011 (UTC)[reply]
Are you sure? When, in an otherwise darkened room, I turn down the brightness of my screen, which reduces the contrast much more than does squinting, the unfocussed images do not seem to become more focused. Should the three unfocused images resolve equally, according to your theory, as the contrast lessens? μηδείς (talk) 22:44, 9 August 2011 (UTC)[reply]
I have astigmatism, it makes streetlights at night look very pretty, like stars on a Christmas card. In the day everything just has a rather pleasing soft edge to it. The world is much prettier with my glasses off. DuncanHill (talk) 22:48, 9 August 2011 (UTC)[reply]
For me (myopic and astigmatic) they look like chrysanthemums. This is particular fun at places like railway stations where you can find a lot of differently coloured lights against an otherwise dark area in a small field of view. {The poster formerly known as 87.81.230.195} 90.197.66.80 (talk) 23:14, 9 August 2011 (UTC)[reply]
When you squint, you reduce the effective aperture of your eyes. This gives a greater depth of field, putting more of the visual field in focus. (You also change the shape of your eyeball slightly, but the effect this has varies depending on what vision problems you have.) --Carnildo (talk) 01:23, 11 August 2011 (UTC)[reply]
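The aperture argument above can be illustrated numerically with the standard thin-lens depth-of-field formulas. The focal length, subject distance, and circle of confusion below are illustrative choices for a camera lens (the eye differs in detail, but the trend is the same): stopping down the aperture – the optical analogue of squinting – widens the zone of acceptable focus.

```python
# Thin-lens depth-of-field sketch: a larger f-number N (smaller aperture)
# extends the in-focus zone, which is why squinting sharpens the image.
def depth_of_field(f_mm, N, subject_mm, coc_mm=0.03):
    """Near/far limits of acceptable focus for focal length f_mm, f-number N."""
    H = f_mm**2 / (N * coc_mm) + f_mm          # hyperfocal distance
    near = subject_mm * (H - f_mm) / (H + subject_mm - 2 * f_mm)
    far = (subject_mm * (H - f_mm) / (H - subject_mm)
           if subject_mm < H else float("inf"))
    return near, far

s = 2000.0  # subject 2 m away, 50 mm lens
for N in (2.8, 8, 22):                 # wide-open aperture vs. "squinting"
    near, far = depth_of_field(50.0, N, s)
    print(f"f/{N}: in focus from {near:.0f} mm to {far:.0f} mm")
```

At f/2.8 only a couple of hundred millimetres around the subject are sharp; at f/22 the sharp zone spans metres, at the cost of admitting much less light.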
I am astigmatic and apparently it means I see circles with straight sides. However, my eyesight has changed over the years and so have the effects. It's not the focus per se that is affected in my case – that is affected now by my presbyopia. --TammyMoet (talk) 12:53, 10 August 2011 (UTC)[reply]

hurricanes in Puerto Rico


Been thinking of visiting Puerto Rico next year, but the easiest time for me to get off of work is in early September, which is also the peak of the hurricane season. Obviously nobody can say if there will be a hurricane or not at that time, but how high is the likelihood, and how hard does PR get hit by hurricanes? Beeblebrox (talk) 16:32, 9 August 2011 (UTC)[reply]

North Atlantic tropical cyclone has lots of nice maps and pictures, broken down by month, of when and where hurricanes are most likely to hit. Wikipedia's hurricane people are some of the best writers we have, and the articles here are actually very good. --Jayron32 17:18, 9 August 2011 (UTC)[reply]
I would put the chances of a significant hurricane hitting Puerto Rico during a brief period this September in the vicinity of 1%, or less. It could certainly happen, but I myself would not let my plans be altered by the possibility. Looie496 (talk) 18:27, 9 August 2011 (UTC)[reply]
Subjectively I have to agree with Looie496. Having lived there I pay attention to the news and it seems like the island is targeted maybe once every three years or so. According to this article it is only hit one year in five by a tropical storm. μηδείς (talk) 22:50, 9 August 2011 (UTC)[reply]
Well, that is a lot more than one percent! --Lgriot (talk) 08:05, 10 August 2011 (UTC)[reply]
One year in five, not one September in five. The Atlantic hurricane season is roughly five months long, so the odds of being hit in a given one-week period are about 1 in 5*5*4, or 1%. --Carnildo (talk) 01:26, 11 August 2011 (UTC)[reply]
[2] says that Vieques gets a major hurricane (SSHS Category 3 or greater) approximately every 5.37 years. Titoxd(?!? - cool stuff) 23:12, 9 August 2011 (UTC)[reply]
So the chance of a hurricane hitting next year is 1/5.37 or 0.186. Hurricane season is about 22 weeks long (June to October), so if you're there for one week (and if we assume that each of the 22 weeks is equally likely to be the one when the hurricane hits), then the chance of it being your week is 1/22 or 0.045. Multiplying the two together we get 0.008, or a 0.8% chance that a major hurricane will hit while you're there. If you stay two weeks, the chance doubles to 1.6%. So Looie496's guess of 1% isn't far off. Pais (talk) 08:38, 10 August 2011 (UTC)[reply]
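That back-of-envelope calculation, written out (same assumptions as in the posts above: one major hurricane every 5.37 years, a roughly 22-week season, and each week equally likely to be the one that's hit):

```python
# Chance that a major hurricane hits Puerto Rico during a short visit.
p_hit_per_year = 1 / 5.37        # one major hurricane roughly every 5.37 years
season_weeks = 22                # June-October season, roughly

p_one_week = p_hit_per_year / season_weeks   # assumes all weeks equally likely
print(f"one week:  {p_one_week:.1%}")        # ~0.8%
print(f"two weeks: {2 * p_one_week:.1%}")    # ~1.7% (rounding first gives 1.6%)
```

The equal-likelihood assumption is the weak point, as noted below: hits cluster around the early-September peak, so a visit then is somewhat riskier than this average suggests.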

Of course, given that the surest way to make a subway pull into the station is to light a cigarette on the platform, you may wish to keep your planned getaway dates secret so as not to tempt fate. μηδείς (talk) 16:28, 10 August 2011 (UTC)[reply]

That is just observer bias. Googlemeister (talk) 19:31, 10 August 2011 (UTC)[reply]
Pais's numbers may be mathematically correct, but... As Beeblebrox noted, the Atlantic hurricane season peaks around September 10th. Check the chart here, which shows that a substantial percentage of hurricanes are clustered from August 20th to October 1st. The distribution is far from evenly spread across the season. Data that talk about hurricanes landing somewhere usually mean that that is where the eye of the hurricane passed over or hit. The most substantial damage, and the highest winds, may well be in that one spot, but a hurricane can seriously affect the weather for hundreds of miles around. Puerto Rico is right at the point where the trade winds come from the Atlantic to the Gulf of Mexico. According to this map, Puerto Rico is thus in the middle of the area where a hurricane in September is most likely to track. So, while Puerto Rico (or anywhere else, for that matter) may not take a direct hit, it could well rain there for the whole month as the island is affected by what happens all around it. I'd change my plans just because "unsettled" is not what I want in weather when I vacation. For serious answers, consider going to the Weather Underground's blog, registering (free) and asking your question there. There are a couple of really knowledgeable posters from Puerto Rico who post almost every day during the season. Bielle (talk) 20:20, 10 August 2011 (UTC)[reply]

Compound eye and focus


How can the focus be changed in a compound eye (specifically, an apposition eye) ? In bees, for example, they must need a short focus when in the hive, say on the order of mm, while when out looking for flower patches, a much longer focus, maybe on the order of a km, might be needed. So, does each chamber of the compound eye have a different focal length and/or lens shape, allowing them to choose which images they "see" at any point, by ignoring the rest ? StuRat (talk) 17:34, 9 August 2011 (UTC)[reply]

Do bees use sight to find flowers as opposed to scent? Googlemeister (talk) 18:35, 9 August 2011 (UTC)[reply]
They must use at least some combination of the two; if no sight were involved, it would make the bee dance rather pointless. The OP may find some interesting reading in Bee learning and communication and may also find some threads to follow on and off Wikipedia. There's quite a bit there on bee vision. --Jayron32 19:06, 9 August 2011 (UTC)[reply]
Compound eyes cannot change focus. Each ommatidium sees only one pixel, and their acuity is limited by diffraction anyway. Dauto (talk) 20:09, 9 August 2011 (UTC)[reply]
After much digging, because I also wanted to know! Heh
Compound eyes can't change focus at all. But partly a yes to your question. At least from what I can understand. In bees, certain patches of ommatidia on the front are larger and are able to discern more details, analogous to our central vision (same term is used for theirs - fovea). (See Eye Design Book and our own article on Eye) .
Second point: whether they have long-range vision or not hasn't really been proven (see The ABC and XYZ of Bee Culture). The same goes for whether they can "choose" which images to see; we can only guess. We're just as limited by our own senses, after all, and as far as we're concerned, bees are completely alien in terms of perception.
They probably don't though. Fovea or not, compound eyes still will (AFAIK) have very short vision ranges in contrast to camera eyes. Their ability to find flowers at distances probably has more to do with their more or less excellent sense of direction and not on acute vision. The sun + the ability to see polarized light (i.e. actually see the direction of the light) = built-in GPS. (in addition to the ability to see UV, having ocelli, the faster 'framerates' of compound eyes, and probably some other highly specialized weird stuff going on in their brains)
So what we see as the uncanny ability to 'see' food sources kilometers away is most likely observer bias. They do send out scouts at first to search exhaustively for food sources (and they do so with search patterns and all). If one finds some, she then returns and does a dance (and/or leaves an odor trail). Thus later foragers would seem amazingly far-sighted to us, able to go unerringly from hive to a patch of flowers 10 km away. In truth, they can find it not because they see it from afar, but because they already know where it is... returning bees with food will point it out again and again. (Also see Positional Communication and Private Information in Honeybee Foraging Models and Generalisation and Cognitive Abilities in Bee Vision). -- Obsidin Soul 20:13, 9 August 2011 (UTC)[reply]
In looking for the answer I found that bees have simple eyes too, which I found pretty interesting. Those don't seem to have much acuity either though and apparently they use them for flight instead of actually looking at things, but they can focus them a little bit. Recury (talk) 20:18, 9 August 2011 (UTC)[reply]
(ec) I found a rather cute write-up at a quirky site.[3] First it points out the obvious: an ommatidium contains only about 8 rhabdomeres (photoreceptor cells; see here for the fruit fly example). Then it makes a claim I don't really understand well: that the individual facet "acts as a waveguide" if less than 5 microns wide, and for some reason this prevents insects from packing the facets more closely. Also remember that the insect has just this tiny area to see in any given direction, so the amount of light must be limiting as well. I would guess that a lot of image processing must be going on after the fact (there are different orientations of the rhabdomere membranes, different paths are being compared... who knows what kind of interferometry-like tricks are involved? Given the insect's limited workspace, perhaps this is as much like a radio telescope array as an eye.) Wnt (talk) 20:28, 9 August 2011 (UTC)[reply]
Hmmm, this source appears to give the math for the waveguide argument, though I don't see/didn't calculate a number. [4] Wnt (talk) 20:33, 9 August 2011 (UTC)[reply]
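An editorial aside for readers curious about the numbers behind the diffraction trade-off discussed above: the usual back-of-the-envelope argument is that a facet of diameter D on a roughly spherical eye of radius R covers an angle of about D/R, while diffraction blurs each facet's view by about λ/D; balancing the two gives D ≈ √(λR). A minimal sketch of that calculation (the function name and example values here are illustrative choices, not taken from the sources linked above):

```python
import math

def optimal_facet_diameter(eye_radius_m, wavelength_m=500e-9):
    """Facet diameter where the diffraction blur (~lambda/D) equals the
    geometric inter-ommatidial angle (~D/R), i.e. D = sqrt(lambda * R).
    Larger facets waste resolution on diffraction-free detail the eye's
    geometry can't use; smaller facets are diffraction-limited."""
    return math.sqrt(wavelength_m * eye_radius_m)

# For a bee-scale eye of ~1 mm radius, viewed in green light:
d = optimal_facet_diameter(1e-3)
print(f"optimal facet diameter: {d * 1e6:.1f} microns")
```

For those numbers the optimum comes out around 20 microns, which is the same order as the facet sizes commonly quoted for honeybee eyes, and comfortably above the ~5-micron waveguide threshold mentioned above.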

Thanks all, for the feedback so far. StuRat (talk) 19:03, 12 August 2011 (UTC)[reply]

Allergy towards x


Is any type of allergy possible (excluding, maybe, allergies to inert substances)? Why are some allergies more frequent than others? In some cases it's clear: a water allergy has to be less common than an orange allergy. But why are cases of pork allergy less common than strawberry allergy? — Preceding unsigned comment added by 193.153.125.105 (talk) 23:55, 9 August 2011 (UTC)[reply]

Have you read the articles Allergy and List of allergies? Scientists have discovered a name for dihydrogen monoxide allergy. Cuddlyable3 (talk) 00:34, 10 August 2011 (UTC)[reply]
Incidentally, I have read those articles. I don't see an answer to whether all allergies are possible, nor to why some are more frequent than others. It's clear that exposure to the allergen increases the risk of developing an allergy. However, if many people are exposed to pork, why isn't a pork allergy very common? 193.153.125.105 (talk) 11:16, 10 August 2011 (UTC)[reply]
Allergies are responses to specific allergens. Allergens have to have a complex, recognizable shape to which immune cells can react in a lock-and-key fashion. There is a period during their development when immune cells that happen to bear receptors reacting to molecules normally present in the body become deactivated; as a result, we normally do not develop allergies to molecules generally present in the bloodstream, which prevents autoimmune reactions. The types of molecules we normally develop allergies to will be proteins not normally found in the body, like those found on pollen grains or in cat saliva. We don't form allergies to pork because, first, we consume it cooked, and hence the proteins are denatured or degraded, and second, because the body builds a tolerance for substances that enter the body by being eaten rather than by crossing some other membrane. It would be quite possible to develop an allergy to some other sort of substance associated with live pigs, in the same way that human cat allergies are usually a reaction to a substance in cat saliva which is deposited on their hair as they groom themselves.
A standard immune response to water as an allergen is not possible. Water is not a complex enough molecule, and its universal presence means that any immune cells that could react to it would become desensitized during their early development; otherwise you would be allergic to your own blood. Sensitivity to water on the skin must be mediated by some other cause. μηδείς (talk) 16:20, 10 August 2011 (UTC)[reply]
If being eaten is a way to prevent an allergy, then why are people allergic to a lot of foods such as peanuts, shellfish or in my case pineapple? Googlemeister (talk) 19:30, 10 August 2011 (UTC)[reply]
Eating small bits of a substance is a way of building up a tolerance, not of preventing or immediately curing an allergy. It doesn't mean that eating a whole apple is a good idea for a person who is allergic to apples. The articles aren't particularly well written, but see Immune tolerance and Hyposensitization. My expertise is based on having taken a 400-level course more than a decade ago and being a lifelong sufferer. If you have real concerns, see a professional. μηδείς (talk) 21:58, 10 August 2011 (UTC)[reply]
Generally something should be immunogenic to cause an allergy, but a small hapten might cause one if, for example, it reacts with a protein in your skin, creating a new epitope. For example, poison ivy uses urushiols, molecules too small on their own to be the target of an antibody, but highly reactive, binding to components of the skin. There's even a nickel allergy, though I don't know the details of how it happens offhand. Wnt (talk) 02:46, 11 August 2011 (UTC)[reply]
Interesting. Yes, denaturation of proteins by smaller molecules can certainly cause allergic reactions. My response to poison ivy went from normal as a child to violently allergic as a teen, with one attack resulting in 50% of my body covered in weeping rash. Nowadays I seek immediate medical attention at the first sign of an outbreak; prednisone is an effective treatment. μηδείς (talk) 19:30, 11 August 2011 (UTC)[reply]