Wikipedia:Reference desk/Archives/Science/2013 January 31

From Wikipedia, the free encyclopedia
Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


January 31

I asked this but maybe wasn't clear enough... (3D depth detection)

I asked this but maybe wasn't clear enough. Human eyes take depth cues in the near field from how far they have to converge (e.g. toward something near your nose). This angle is used by the brain. The 3D cameras I've heard of that use stereoscopy, however, remain fixed in a plane. Why is this? Is there some theoretical reason the cameras shouldn't be on fine servos and also turn, using this convergence information as well? Thanks! 178.48.114.143 (talk) 00:58, 31 January 2013 (UTC)[reply]

Not so much a theoretical reason as a practical reason. The eyes have a region of high acuity -- the fovea -- that is quite small, only about 5 degrees across, actually even less for the highest-acuity portion. This makes convergence necessary in order to get both foveas pointed toward a target of interest. 3D cameras, as I understand it, use CCD arrays that have essentially equal resolution across a much larger portion of space. Looie496 (talk) 01:07, 31 January 2013 (UTC)[reply]
OK, but are you saying it wouldn't even help? If we took a normal 3D camera and were trying to get the depth image in the near field, it wouldn't help our algorithm if we had the ability to pick any point, and both cameras would pivot so that point is in the center of their vision, and tell you that angle? That doesn't even help? The algorithm - any algorithm - must be just as happy without that ability? It doesn't give additional information? 178.48.114.143 (talk) 01:17, 31 January 2013 (UTC)[reply]
It might help, but adding more moving parts makes any system less reliable, so there would be a definite downside. You might compare with flight, where bird wings have changeable shapes, and, with a few exceptions, airplanes don't. It does help to have a changeable shape, but the additional complexity brings in new risks. StuRat (talk) 01:22, 31 January 2013 (UTC)[reply]
Okay. Do you think if you have theoretically 'perfect' servos that turn very, very slowly but with complete accuracy... then in this case how much "more detail" (theoretically) can we gain? For example, if parallel lenses can resolve to 1 mm accuracy at a 10 centimeter distance, then would adding an exact angle to converge on a pixel (and still knowing the distance between lenses) increase this to 0.1 mm or anything like that? Sorry, I don't know that much about optics, just curious! I'm curious about the theory here. 178.48.114.143 (talk) 02:14, 31 January 2013 (UTC)[reply]
No change in resolution. All you are doing is cutting off the right side of the picture and adding to the left and thus making part of the image useless for 3D. Unlike an eye, with a camera there is nothing magical about the center spot. --Guy Macon (talk) 02:43, 31 January 2013 (UTC)[reply]
Are you sure? I mean let's imagine that something is directly in front of the left camera, like 10 cm away. Then the right camera has to turn, say, 45 degrees (if it's also 10 cm away): this "45 degrees" would tell you that it's exactly 10 cm away, each cm farther makes the right camera turn a bit less, each cm closer makes it turn a bit more. How are you so sure that ALL of that information is in the basic stereoscopic image without any help from the convergence? I mean I guess I'm asking for kind of like a mathematical argument as to why the convergence wouldn't contain more information... Thanks. 178.48.114.143 (talk) 02:51, 31 January 2013 (UTC)[reply]
I know because optics doesn't work that way. You think that there is some magical property called "convergence" that happens when you change where the camera points, but no such property exists. Similar properties exist, but none of them change when you change where the camera points. Look at any photo. Is the resolution better at the center than it is off-center? No? Then why do you imagine that it would be different if motors aimed the camera? --Guy Macon (talk) 03:01, 31 January 2013 (UTC)[reply]
Hey, sorry if I was unclear. I'm not interested in image quality, but only the quality of "depth" information. I know that my own eyes give a FAR stronger and more accurate depth reading closer to the eyes. But that could be for several reasons, including making use of focus. Is the fact that my eyes converge one of these strong signals or not? I mean, for a depth reading, let's say we are reading a point that is directly in front of the left camera. The basic information is like this [ x ] on the left camera and [x ] on the right camera. How do you know that that is JUST as much information about the depth location of X as if, rather than just two bitmaps, we also had an exact angle empirically arrived at, by the two cameras swivelling toward each point they're depth-gauging, and keeping track of the convergent angle? I understand that you are saying there is no extra information there. But could you give a mathematical argument as to why? Thank you. 178.48.114.143 (talk) 03:45, 31 January 2013 (UTC)[reply]
Guy Macon's answer said it all. There is no need for convergence with a camera because it does not have a fovea. Dauto (talk) 04:18, 31 January 2013 (UTC)[reply]
Hmmm, I'm not sure he's right though. It's not just a matter of having two foveas pointing at something - it's a matter of having them aligned. When you focus on something you actually feel that identical images coalesce. This is hardwired in terms of ocular dominance columns, so that imaging from the equivalent regions of the two eyes (and a big chunk of the visual cortex is fovea) are actually right next to each other. I'm not so sure I'm grasping all the nuances either, though. Wnt (talk) 06:19, 31 January 2013 (UTC)[reply]
Remember, he specified a digital camera. Now if he was talking about eyes, this is a completely different story. The visual system does all sorts of tricks like doing some kinds of processing much closer to the eye than you would expect. See Visual system and Filling-in. It wouldn't seem strange if the visual system used information from the aiming and focusing systems as well as from the retina. --Guy Macon (talk) 13:47, 31 January 2013 (UTC)[reply]
Eyes or camera, you can't easily change the angle an image was captured in post production. It's not just a matter of sliding the image a few pixels to the left like the hero from an old 3D comic book. APL (talk) 19:36, 31 January 2013 (UTC)[reply]
You can if the rotation keeps the location of the aperture fixed, and you have enough overscan that you don't end up with a blank region at the edge of the image, and the lens is good enough (or the rotation small enough) that you don't have problems with aberration. Hugin, for example, will do this transformation for you. Changing the apparent location of the aperture is another matter, but it can actually be done to a limited degree—see 2D to 3D conversion. -- BenRG (talk) 22:19, 31 January 2013 (UTC)[reply]
Are you talking about a camera that takes pictures for later human viewing? If so, the arrangement of the cameras should match the setup of the stereoscopic viewer. If they match, all of the relevant information will be presented to the eyes. They will still be able to cross/uncross to align the two images in the visual field and they will get a depth cue (albeit a fictitious one) from that. The camera doesn't need to know how far away anything is.
If you're talking about automated analysis, you can always shift the image in postprocessing, because of what Looie said: the (angular) resolution is about the same across the whole image. The eyes only need to rotate because they have a fovea. -- BenRG (talk) 07:08, 31 January 2013 (UTC)[reply]
This isn't true at all. The arrangement of the cameras is often very different from the viewer's, to give either an exaggerated or muted sense of depth. (Not just for artistic reasons either. We can't perceive much depth beyond about 25 ft, so for wide shots they exaggerate it.) APL (talk) 19:36, 31 January 2013 (UTC)[reply]
Yes, you can change the depth; you can also change angles with a fisheye lens or colors with a sepia filter. However, the OP seem(s/ed) to think that convergence-related depth perception would be lost on reproduction unless the 3D camera imitated the human eye in real time, and that's not true. Fixed cameras, and a fixed arrangement of the reproduced images, preserve everything except focus/blur. Focus does need to be adjusted in real time unless you're using one of those crazy new lightfield cameras.
Convergence and refocusing are tied together in human vision, though, so it may be that you have to do both sometimes when filming to avoid a confusing disparity. I don't know. But doing it "perfectly" would actually eliminate both depth cues, so that the object currently in focus would always appear to be the same distance away. -- BenRG (talk) 22:19, 31 January 2013 (UTC)[reply]
I should add a link to parallax, which talks about measuring distances in this manner. The advantage I can see to changing the angle of the cameras is that you would be able to get closer objects in view in both cameras. Of course, being able to move the cameras, while held at the same angle, would work, too.
Also, if you have zoom lenses, being able to aim the camera exactly at the object in question would allow you to zoom in much further, which presumably would increase the accuracy of any measurements. StuRat (talk) 07:14, 31 January 2013 (UTC)[reply]
I added to the title to make it more useful. StuRat (talk) 07:26, 31 January 2013 (UTC) [reply]
Adding convergence to the cameras would mean your eyes didn't need to converge to see the same thing - which would mean one important source of depth information would be lost to you. In fact more distant things would seem very strange. There's no point doing it twice. Dmcq (talk) 11:30, 31 January 2013 (UTC)[reply]
All the 3D information is available without turning the lenses: let's say the distance between the lenses is 10 cm, pixel width is 1000, and the two images start overlapping at 20 cm from the camera, meaning that a point at the right edge of the left image is on the left edge of the right image. Then you know that a difference of 1000 pixels corresponds to 20 cm. If you double the distance, the difference will be halved: 40 cm is 500 pixels, 4 m is 50 px, 40 m is 5 px and so on. Turning one lens won't give you more information, because it simply shifts all the pixel positions to the left or right. The calculations may be more complex than this due to optical distortion and because points to the left or right of center will be closer to one of the lenses than the other, but the principle remains the same. Ssscienccce (talk) 12:49, 31 January 2013 (UTC)[reply]
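The constant-product relation in Ssscienccce's reply (depth times disparity stays fixed for a given rig) can be sketched in a few lines. The numbers below are the hypothetical ones from the post, not any real camera, and the function name is invented for illustration:

```python
# Hypothetical figures from the post above: a rig whose two 1000-pixel-wide
# images first overlap at 20 cm, so a point at 20 cm shows a 1000 px
# disparity. Depth times disparity is then a constant for this rig.
K = 20.0 * 1000.0  # depth (cm) x disparity (px)

def depth_from_disparity(disparity_px):
    """Depth in cm implied by a left/right pixel disparity (pinhole model)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive; zero means infinitely far")
    return K / disparity_px

# Doubling the distance halves the disparity, as the post says:
print(depth_from_disparity(1000))  # 20.0 (cm)
print(depth_from_disparity(500))   # 40.0 (cm)
print(depth_from_disparity(50))    # 400.0 (4 m)
print(depth_from_disparity(5))     # 4000.0 (40 m)
```

Note how the same pixel of disparity covers ever more depth at long range, which is why stereo depth precision falls off with distance.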

Wait a moment!

Convergence is an important part of 3d photography. Professional 3d cameras have the ability to adjust the relative angle of the two lenses so that when photographing close-up objects you can use the correct convergence (or "toe-in").
This is a vital feature because if it's not done then the image has to be "re-converged" in post production, which will result in lost information, and possibly distortion. For items close to the cameras, setting proper convergence actually results in the object being seen from slightly different angles than would be captured by parallel cameras, an important effect which can't be duplicated properly in post-production.
The convergence angle isn't typically changed during a shot (like focus often is), because it can be disorienting, but it wouldn't surprise me if there were some situations where it was acceptable.
All but the cheapest amateur cameras offer this adjustment.
Here is an advertisement for a Panasonic camera that makes much of how easy it is to adjust the convergence.
APL (talk) 19:36, 31 January 2013 (UTC)[reply]
It is by no means an established fact that a converged stereo camera arrangement is better than a parallel stereo camera arrangement. See http://vfxio.com/PDFs/Parallel_vs_Converged.pdf http://www.lightillusion.com/stereo_3d_settingconvergence.html and http://www.reduser.net/forum/showthread.php?84639-Camera-Converged-or-Parallel --Guy Macon (talk) 22:49, 31 January 2013 (UTC)[reply]
I'm the OP here and your links - and the present discussion - have nothing to do with my question. The question is on 3d depth DETECTION (like Microsoft Kinect does) and has nothing to do with 3D vision whatsoever, except that a stereo camera can be used if it helps with... 3D detection. The only point is to "scan" (detect) the world in front of the cameras accurately with regard to depths. No relationship with or intention to show this back. Think of a robot trying to grab something. 178.48.114.143 (talk) 03:25, 1 February 2013 (UTC)[reply]
I happen to have been the lead engineer on two robotics projects that used two-camera machine vision to measure distance. One of them even had your arm "trying to grab something." It's a shame that you are not willing to listen to my explanations without telling me that you know more than I do. Oh well. Experience is a harsh schoolteacher, but some people will accept no other. --Guy Macon (talk) 07:42, 1 February 2013 (UTC)[reply]
I may have been hasty. See my other clarification though ("No, there is absolutely no human viewing involved *whatsoever*, nor are there pictures. The computer will take the depth information and further process it. at no point is a human, or any other vision, involved. only the depth is important for us.") As such please understand that I clicked through to ALL THREE of your links. The first link talks about "As we all know, to SEE stereo, we need two different views of a scene". The whole article is about this, vision and the brain, and has pictures of 3D glasses. It has zero information on a computer actually trying to get digitized depth information. The second linked article also has a picture of 3D glasses, and has zero information on algorithmically deriving exact depth distances in mm. Everything is about "shots". The third link is the same, and concerns preparing images for human consumption. Please understand that while you might have experience turning stereo images into millimeter-precise depth maps for computer (not human) consumption, you just linked me THREE articles that do not have a word to say on this subject. 178.48.114.143 (talk) 11:58, 1 February 2013 (UTC)[reply]
What I am trying to get you to understand is that they are the exact same problem. It doesn't matter whether the computer is made of meat or silicon. Both computers are trying to calculate depth. Whether the sensor is a digital camera or an eye does make a difference, for reasons we have discussed at length, but the nature of the computer does not make a difference.

Here is what a computer -- meat or silicon -- has to do to calculate distance to a point.

First, it has to locate the point in the visual field of both cameras. If it is looking at a featureless wall or a bank of fog, this may not be possible. If the edges are "soft", the depth estimates of both brains and computers become less precise.

Next, it has to make sure that the two are the same point. The magic eye illusion works by fooling the meat computer, and a magic eye repeating pattern will typically confuse the silicon computer into giving grossly inaccurate distance estimates as well.

There is a trick that the mind uses, and which is often used by single-lens digital cameras, which is to look at what focus settings give the sharpest edges at the point of interest. This isn't very precise, but it will tell the meat/silicon computer that is trying to decide whether something is one foot away or three feet away which is correct.

Finally, the silicon or meat computer determines the distance by geometry. It knows one side (the distance between the two cameras) and two angles of a triangle, which is enough to solve for the length of the other two sides. And of course the farther away the point is, the worse this works.

Note that none of the above involves any sort of aiming information. You need that with an eye, because you can't locate those sharp edges unless they are at the center of the field. With a digital camera, the resolution is the same across the field, so aiming information adds nothing.

This is why the links I gave you are completely relevant -- it is the same problem whether it is a robot trying to grab something or a 3D movie. Anything that screws up the robot vision will screw up the movie. And aiming is not one of the things that screws up either system. --Guy Macon (talk) 19:43, 1 February 2013 (UTC)[reply]
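The final geometry step described above (one known side, the baseline, plus the two angles at which each camera sees the point) can be sketched numerically. The function name and figures below are invented for illustration, assuming each angle is measured from the baseline toward the target:

```python
import math

def triangulate(baseline_cm, angle_left_deg, angle_right_deg):
    """Return (x, y) of the target, with cameras at (0, 0) and (baseline, 0).

    Angles are measured from the baseline toward the target, so the two
    rays y = tan(aL) * x and y = tan(aR) * (baseline - x) cross at the target.
    """
    aL = math.radians(angle_left_deg)
    aR = math.radians(angle_right_deg)
    # Intersection of the two rays:
    x = baseline_cm * math.tan(aR) / (math.tan(aL) + math.tan(aR))
    y = x * math.tan(aL)
    return x, y

# Symmetric 45-degree angles on a 10 cm baseline put the target midway
# along the baseline, 5 cm out:
print(triangulate(10, 45, 45))  # ~ (5.0, 5.0)
```

As the target recedes, both angles approach 90 degrees and tiny angle errors swamp the result, matching the point above that the method gets worse with distance.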

Actually YOU wait a moment!! Can't convergence disambiguate? Think of a repeating Magic Eye!

Actually YOU wait a minute!! Can't convergence disambiguate?! For example, think of a Magic Eye pattern that repeats. Then it could be interpreted as being at one of several "depths" (i.e. take a line of dots . . . . . . . . . . . . . . . and interpret it so that the left and right image are off by one dot... but if you interpret it so that the left and right image are off by two dots, you get a different depth). So, suppose the camera has two candidate depths: wouldn't picking a point that is construed as being "the same point" in a candidate, and turning each camera to the calculated depth where that point would be in the center, actually tell you whether your candidate was right? Take a simple example: suppose a screen is displaying a repeating Magic Eye for the camera. If the cameras are parallel, the screen could be 20 inches or 200 inches away and the camera couldn't tell (except by focus tricks). But if the camera moves to converge the lenses, then the convergence for 20 inches (two pictures a hundred dots over should converge) wouldn't work at 200 inches (every second dot should converge). Meaning: the camera would pick a candidate "same" pixel at 20 inches, turn both cameras to it - but find that it wasn't the same physical spot! In fact it was two different physical spots 200 inches away that the camera misinterpreted as being 20 inches away. Therefore, turning the cameras would in this case add MORE information and REDUCE the possibility of making a consistent depth error in repeating or homogeneous images that have few features the cameras can pick up on! What do you think about this reasoning? 178.48.114.143 (talk) 00:40, 1 February 2013 (UTC)[reply]
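The ambiguity being described can be sketched with made-up numbers: if the pattern repeats every so many pixels, a measured disparity is only known up to a multiple of that period, so several depths fit the data equally well. The period, the rig constant, and the function name below are all hypothetical:

```python
# Made-up numbers: a dot pattern repeating every 100 pixels makes any
# measured disparity ambiguous (off by one dot vs. off by two dots),
# so several candidate depths are equally consistent with the images.
PERIOD_PX = 100.0   # hypothetical repeat period of the pattern
K = 20000.0         # hypothetical depth(cm) x disparity(px) constant of the rig

def candidate_depths(measured_disparity_px, period_px=PERIOD_PX, n=4):
    """All depths consistent with a disparity known only up to the period."""
    return [K / (measured_disparity_px + k * period_px) for k in range(n)]

# An apparent 100 px offset could really be 100, 200, 300 or 400 px,
# i.e. the screen could be at 200, 100, ~66.7 or 50 cm:
print(candidate_depths(100))
```

Whether the true disparity is resolved by physically converging the cameras or by testing each candidate against the fixed images is the point under dispute in this thread.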

Further, isn't this, in fact, the reason we "know" that Magic Eyes are "false"? I mean, even if there is nothing else in our frame of vision, just looking up at the sky and a Magic Eye is in front of us, we know it's not real. Isn't the reason we know it's fake because of physical eye movement convergence that's not happening for the purported depth? A parallel camera simply wouldn't have that information, and would be truly tricked by a Magic Eye if it interpreted it as the Magic Eye wishes: it would truly "believe". But, if it converges the angle of the lenses as we humans do, it would know that it is just a picture at a set depth... it wouldn't be tricked by the pattern. Isn't this an example of how the physical convergence actually adds really important up-close depth information? 178.48.114.143 (talk) 00:54, 1 February 2013 (UTC)[reply]
In the infinitesimal-pinhole camera approximation (infinite depth of field), there's a one-to-one correspondence between points on the film and angular direction in the field of view. If you place images of the photographs in the appropriate places in front of the viewer's eyes, the light will enter the pupil at the same angle that it would have if coming from the original scene, and if the color and brightness also match, the result will be indistinguishable from the original scene, even when you rotate your eyes. Although the light is coming from a different distance, you can't tell because it's in-line with the original position. If you make the cameras converge as you suggest, but don't reverse the effect on projection, it will actually erase the distance information, since the matching parts of the displayed images will then always be in the same place.
There is a small movement of the pupil when you turn your eyes and hence a small parallax effect, but I don't think it's used as a depth cue. I can't find any mention of it in a quick web search—there's the parallax between the two eyes, parallax from a moving subject, and parallax from moving your head, but that's all. If it does matter then your auto-converging cameras might make sense, but only if the projected images are moved too so that the angles remain correct when the eyes converge on the chosen subject. -- BenRG (talk) 01:47, 1 February 2013 (UTC)[reply]
Just to be clear, are we clear that while converging the camera you know the exact movement angle through a high-resolution reading of the servo motor doing so? That is the whole point. So, if you converge the cameras by pointing them nearly at each other, then you might have erased the distance information, but you now ALSO know the two servo angles. It's triangulated like this: / \ and from the two angles and the distance between the cameras (the known values) you can calculate where the beams cross. This is my idea... I thus don't see how depth information can ever 'be erased' through convergence, unless you don't keep track of the convergence angles - which is the whole point! 178.48.114.143 (talk) 02:26, 1 February 2013 (UTC)[reply]
Well, I don't understand what you're doing with this information. Are these photographs for human viewing? Are there servos on the viewing device rotating the picture in the same way? If so, what I said above about the pinhole camera still applies - you don't lose the depth information but you don't gain anything either. -- BenRG (talk) 07:16, 1 February 2013 (UTC)[reply]
No, there is absolutely no human viewing involved *whatsoever*, nor are there pictures. The computer will take the depth information and further process it. At no point is a human, or any other vision, involved. The depth is only important for an algorithm (for the computer). No viewing is involved. 178.48.114.143 (talk) 11:51, 1 February 2013 (UTC)[reply]
Oh, okay. If your camera works like the human eye, with most of the pixels in a small region in the center and those pixels wired directly to the subsequent processing stages, then you would need to rotate it just to see the object in any detail. If it works like a typical digital camera with a constant pixel density across the sensor, then you get no additional information (about the center of the image, anyway) by rotating the camera while keeping the aperture fixed. If you move the aperture you do get additional information, but it doesn't matter whether you rotate the camera or just move it sideways. -- BenRG (talk) 18:50, 1 February 2013 (UTC)[reply]
...but it might matter from an efficiency perspective: it could be cheaper/easier to rotate the camera than to do the equivalent transformation in software. -- BenRG (talk) 19:23, 1 February 2013 (UTC)[reply]

I feel like I'm programmed to be racist

So, today a Pakistani guy I went through school with sent me a text saying he was in town, and we arranged to meet up for a chat about life and old times. 15 minutes later I arrived in the city center and met him. He was with a large group of Pakistani and Indian guys, and I instantly felt uneasy, or suspicious or something. I don't even know how to describe it. A negative feeling telling me to beware of these people. I was reluctant to approach them or to appear part of their group, even though they greeted me warmly and made some short small talk. I was kicking myself inside and telling myself these are just more of my fellow human beings. Luckily they were going a different way and my friend left them and came with me into a cafe for a chat. Turns out his group of friends were a college cricket team.

I commonly feel this way around people who are not white, even close friends. I suppress this emotion because I am against racism, but I am intrigued that I feel it. I don't feel this way with groups of white people; I am white. I don't even feel it around even the most foreign white people, from eastern Europe or Russia or wherever. Can anyone explain this obviously biological basis for racism to me? I have heard that people with recessive appearance traits are more prone to racism. I have many of those, blue eyes for example.--Whichwayto (talk) 01:01, 31 January 2013 (UTC)[reply]

let's keep to referenced answers
The following discussion has been closed. Please do not modify it.
Can't say anything about those last two sentences. As for the rest, this probably relates to our tribal past. People from different tribes might have had different appearances, and, as these tribes were frequently hostile to one another, avoiding people who look different was a good way to stay safe. StuRat (talk) 01:25, 31 January 2013 (UTC)[reply]
Do you/other responders ever experience this? I feel it quite often...--Whichwayto (talk) 01:27, 31 January 2013 (UTC)[reply]
Yep. I once sat at an empty table in a cafeteria. Then a black guy sat down, no problem. Then 4 more black guys sat down. Then the entire table was occupied by black guys, except for me. I wasn't quite sure what to do. I felt like maybe I should leave, as I was intruding on their table, but I didn't want to seem racist, so I just ate my meal and left. StuRat (talk) 01:30, 31 January 2013 (UTC)[reply]
Happens to me too. I just ignore it as one more leftover from our prehistoric ancestors --Guy Macon (talk) 02:46, 31 January 2013 (UTC)[reply]

This is the wikipedia reference desk. We can't comment on your personal feelings. μηδείς (talk) 03:48, 31 January 2013 (UTC)[reply]

The OP's second question isn't really an appropriate question for a reference desk, but the original question asking about the biological basis for racism is. I can't easily find much in the way of scientific inquiry into biological reasons for why people are racist (or ethnocentric in general), but there is a little; see Racism#Evolutionary theories about the origins of racism. Red Act (talk) 04:26, 31 January 2013 (UTC)[reply]
For somewhat similar behavior in non-human animals, in which preferential treatment is given towards individuals that are identified as being more closely related, see kin recognition and kin selection, although those generally pertain to closer levels of kinship than racial divisions. See also Evolutionary psychology#Family and kin. Red Act (talk) 06:15, 31 January 2013 (UTC)[reply]
Depending on where you live, there may be some modern basis for your feelings. In predominantly white countries, where the dark skinned folks are a very small minority, history has resulted in them leading a very hard life associated with crime and alcohol and some group-based dislike for whites. But a lot of racism and racist feelings is simply due to the human fear of the unknown. Initially I too (a white) felt uncomfortable when with a group of dark skinned indigenous guys, but having some sustained close contact with them in the course of my career, and formed friendships, as one does, I no longer feel that way with indigenous folk, whether I know them or not. Wickwack 120.145.13.170 (talk) 08:14, 31 January 2013 (UTC)[reply]

Re: "This is the wikipedia reference desk. We can't comment on your personal feelings", Yes we can, in certain situations. "Some questions may demand a broad range of skills and knowledge; it is still helpful to contribute from your areas of personal expertise." --WP:RD/G. Xenophobia is a legitimate area of psychology to ask a question about, and it is acceptable to answer an implied "am I the only one" question with a couple of data points based upon editors' personal experience. Care must be taken, though, to avoid any personal opinions or editorializing.

To get back to the original question. I know of no evidence that people with recessive appearance traits are more prone to racism, and no convincing evidence that racism is genetic. It sounds like you might have heard a fourth-hand description that started with the issues discussed in this Psychology Today column. --Guy Macon (talk) 11:43, 31 January 2013 (UTC)[reply]

Wouldn't it be convenient? "Oh, you can't help being racist - it's a natural genetic feature of people like you!" Fighting one form of genetic essentialism with another. I agree, Guy - this is a clearly false suggestion. AlexTiefling (talk) 11:48, 31 January 2013 (UTC)[reply]
In general, a "nature vs. nurture" question in matters of personality and behavior is a false dichotomy; personality and behavior are determined by a mixture of both. See Nature versus nurture#Personality traits, Personality psychology#Biopsychological theories and Epigenetics in psychology. In the absence of any evidence that a tendency toward racist thinking is different from other aspects of a person's personality, the null hypothesis ought to be that racism is neither 100% due to a person's genes, nor 100% due to a person's environment. Red Act (talk) 17:21, 31 January 2013 (UTC)[reply]
Much of it is your brain coming to terms with unfamiliarity. Unfamiliarity is bound to make you uncomfortable. When I was posted to West Africa I suddenly changed from a nearly entirely white environment to an entirely black skinned environment (there were no other Europeans in the company I worked in and my wife stayed in the UK for the first year). For the first six months I had vivid dreams about all the friends and work colleagues I had out there but in the dreams they were all the same people but morphed into looking Caucasian. On some weird subconscious level it took six months to come to terms with the fact people looking so different could in fact be so similar. It took years to feel completely relaxed. --BozMo talk 11:54, 31 January 2013 (UTC)[reply]
These feelings aren't necessarily "racism" (although they might be). Think carefully: Would you feel just as intimidated if you were suddenly surrounded by (say) a loud, excited group of (caucasian) Frenchmen? If so, then that would suggest that the appropriate word might be "xenophobia" rather than "racism". What about if you were suddenly surrounded by a group of homeless 'panhandlers'? I'm sure you can imagine situations where groups of strangers (albeit friendly and inclusive groups) would intimidate you. I know that I get that uncomfortable 'outsider' feeling amongst a group of very religious people - or a group of sports fanatics - or pretty much any group which I don't feel that I'm a member of. Cast in that light, we're talking about a more general feeling of discomfort when you're in a group with whom you have little in common. I'm not sure this helps - but you should at least think about that. SteveBaker (talk) 14:10, 31 January 2013 (UTC)[reply]
Let's not dig ourselves into a definition of 'racism' that relies on accepting outmoded ideas of 'race'. Bigotry against others on the basis of their ethnicity can be productively defined as racism. AlexTiefling (talk) 14:15, 31 January 2013 (UTC)[reply]
There is a cultural predisposition that correlates to identity. Emotions can be strongly influenced by cultural cues. Why do some cultures enjoy dog meat in their diet and other cultures find the thought of eating dog meat repulsive? Cultures can clash. I don't think there is much of a point to being overly concerned with mere feelings of discomfort as these are merely correlatives of emotions generated by cultural identities. Racism is when one actively takes these naturally occurring emotions and builds upon them to perpetrate negative actions against a target group of an identity other than one's own. Bus stop (talk) 15:00, 31 January 2013 (UTC)[reply]
I don't think it's controversial to say that xenophobia can offer an evolutionary advantage: outsiders can bring disease and compete for resources. It is however controversial to say that racism is in our genes, and AlexTiefling's reaction illustrates that point.
Eugenics' history is a sordid one, and racism is clearly harmful to society as a whole. This may mean that xenophobia has become maladaptive, a trait that puts you at an evolutionary disadvantage.
There are plenty of examples of aggressive behaviour towards "strangers" in the animal world, from bees to chimpanzees. There are also examples where a lack of "xenophobia" proves advantageous. The Argentine ant is very successful in the regions where it's introduced, wiping out other ant species and the native insects. This is in contrast with colonies in its native South America, where the colonies are smaller and other species can compete. The colonies in Australia, Asia, Europe and North America show much less genetic diversity than the native ones: other ant species will kill an ant from another colony, but the introduced Argentine ants won't; in essence, the colonies outside South America form one big supercolony of ants that do not compete or fight with each other, which has ranked them among the world's 100 worst animal invaders.
But I don't think a topic like racism as a genetic trait can be objectively discussed nowadays, and probably not for a long time. Some subjects are taboo: a genetic cause of racism, comparing intelligence of different races, criminality and race... Any study claiming results on such a topic would be discredited, rejected and refuted, because accepting them would open a Pandora's box we might not be able to close again. Ssscienccce (talk) 15:23, 31 January 2013 (UTC)[reply]

RF interference - update[edit]

About 2 weeks ago I was asking about interference between a wireless mike and some wireless devices. It turned out that one of the other devices had gone bad - not interference from the mike. Bubba73 You talkin' to me? 03:12, 31 January 2013 (UTC)[reply]

OK, thanks for the update. StuRat (talk) 03:14, 31 January 2013 (UTC)[reply]
That's great! Your report is appreciated. It is unfortunately quite rare that we ever get any report or thanks, but when we do, it makes us feel good. Wickwack 120.145.13.170 (talk) 08:04, 31 January 2013 (UTC)[reply]
Resolved

Bubba73 You talkin' to me? 14:37, 31 January 2013 (UTC)[reply]

the temperature of nucleon[edit]

Can we calculate or assign a temperature to a nucleon? Does the core of an atom really have a temperature? And, on the same subject: did matter in the early Big Bang have a temperature?--Akbarmohammadzade (talk) 06:30, 31 January 2013 (UTC)[reply]

No, speaking of the temperature of a single particle in isolation is nonsensical. At the individual particle level, you'd be dealing with the kinetic energy of the particle. It is only "in the bulk" that temperature becomes a meaningful concept. The Boltzmann constant (or its cousin the Gas constant) does relate the energy of an individual particle to temperature, but conceptually, temperature is really hard to make sense of at the individual particle level. Temperature is really only meaningfully defined in the context of the transfer of thermal energy (see also zeroth law of thermodynamics) between actual objects composed of many particles. An individual particle transfers energy to another individual particle not unlike the way a billiard ball transfers energy to another billiard ball, so it really isn't useful to bring temperature into it. Though, again, by the simple math of the Boltzmann constant, you could convert the kinetic energy of any particle into a temperature. --Jayron32 06:42, 31 January 2013 (UTC)[reply]
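The conversion Jayron mentions can be made concrete. Below is a minimal sketch (my own, not from the thread) of the relation E = (3/2)·k<sub>B</sub>·T solved for T, using the CODATA value of the Boltzmann constant; the 1 eV example energy is an arbitrary illustrative choice.

```python
# Convert a single particle's kinetic energy to its "temperature equivalent"
# via E = (3/2) * k_B * T  =>  T = 2E / (3 * k_B).
# As discussed above, this is only a formal conversion: temperature is a
# bulk property, not a property of one particle.

K_B = 1.380649e-23    # Boltzmann constant, J/K (exact in the 2019 SI)
EV = 1.602176634e-19  # 1 electronvolt in joules (exact)

def temperature_equivalent(energy_joules):
    """Temperature whose mean thermal kinetic energy equals the given energy."""
    return 2.0 * energy_joules / (3.0 * K_B)

# A particle with 1 eV of kinetic energy "corresponds to" roughly 7700 K:
print(round(temperature_equivalent(1 * EV)))
```

The same one-liner run in reverse gives the familiar result that room temperature (~300 K) corresponds to mean particle energies of a few hundredths of an electronvolt.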
I assume nucleons have a temperature in a quark-gluon plasma. I also suspect collective nuclear motion sheds some light on this, but I shouldn't pretend to understand much. What is the mechanism (if any) of thermal equilibrium between the motion of nucleons and the motion of electrons...? Wnt (talk) 17:52, 31 January 2013 (UTC)[reply]
A quark-gluon plasma doesn't have nucleons (with clear boundaries) in it, but it definitely has a temperature. As a rule of thumb anything that emits light has a temperature and vice versa. A stable atom in the ground state will never emit light, meaning it's effectively a system at absolute zero (unless the proton is unstable). -- BenRG (talk) 21:48, 31 January 2013 (UTC)[reply]
You should stress 'effectively', because strictly speaking, that statement would be incorrect. Plasmic Physics (talk) 04:37, 1 February 2013 (UTC)[reply]
Hmmm... do higher energy nuclear isomers have temperature based on their ability to emit gamma rays? What about one like 180m Ta that hasn't yet been observed to decay? Wnt (talk) 17:53, 1 February 2013 (UTC)[reply]

natural elements[edit]

Back in my day, they said that there were 92 naturally occurring elements. List_of_elements_by_stability_of_isotopes says (in two different places):

  • the 94 naturally-occurring elements
  • All elements to element 98 are found in nature

So is it 92, 94, or 98? Bubba73 You talkin' to me? 06:43, 31 January 2013 (UTC)[reply]

It's all going to depend on precisely how you define "naturally occurring". For example, you could say that any element ever generated in nature counts, even if it's only theoretically created during a supernova and lasts for a trillionth of a second. In this case, the number of elements will be higher. If you only count elements actually detected here on Earth, then the number will be lower. StuRat (talk) 07:04, 31 January 2013 (UTC)[reply]
(edit conflict) "Naturally occurring" elements is a bit of a fuzzy idea. The better concept is to speak in terms of Primordial nuclides, which are nuclides of elements that have existed on earth since its formation, and not merely as transient intermediates in decay pathways. It's these transient intermediates that cause all the discrepancy in counting "natural" elements. The number of elements with at least one primordial nuclide is 84: All the elements through lead (Z=82), less Technetium and Promethium, have at least one truly stable nuclide. Add to this Bismuth, Thorium, Uranium, and Plutonium, none of which have any stable isotopes, but have at least one isotope which is long-lived enough to have existed, on earth, since it was formed. The other elements to take you up to 92 (or 94 or 98) include those elements which are not primordial, yet are still long-lived enough to accumulate in the earth's crust in more-or-less equilibrium conditions; that's where the fuzziness comes in in the counting, trying to figure out which non-primordial elements exist long enough to be said to be truly natural, versus those which are so short-lived as to be transient and not really "natural" even if you could find some traces in the Earth's crust. --Jayron32 07:06, 31 January 2013 (UTC)[reply]
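Jayron's count of 84 primordial elements is just arithmetic: elements 1 through 82 (hydrogen through lead), minus technetium (Z=43) and promethium (Z=61), plus bismuth (83), thorium (90), uranium (92) and plutonium (94). A toy sketch (my own, merely mirroring the tally above) makes the bookkeeping explicit:

```python
# Tally of elements with at least one primordial nuclide, following the
# reasoning above: Z = 1..82, less Tc (43) and Pm (61), plus Bi, Th, U, Pu.
stable_range = set(range(1, 83))      # hydrogen through lead
no_long_lived_isotope = {43, 61}      # technetium, promethium
long_lived_extras = {83, 90, 92, 94}  # bismuth, thorium, uranium, plutonium

primordial = (stable_range - no_long_lived_isotope) | long_lived_extras
print(len(primordial))  # 84
```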
It's 98, namely all elements with Z≤98. I've corrected the contradiction in the article. The confusion probably comes from several isotopes initially "discovered" by synthesis, but later (quite recently, so there are many outdated sources) found to actually occur naturally in minuscule quantities (most in uranium ores). @Jayron: I see no fuzziness in the definition: "some traces in the Earth's crust" logically suffices for being naturally occurring, and that's how it is generally defined (e.g. astatine is generally considered natural, despite its short half-life and minuscule quantities). Of course, with better detection methods, the number of known natural elements tends to increase; but so do the primordial elements: Pu was not known to be primordial some time ago. --Roentgenium111 (talk) 14:40, 31 January 2013 (UTC)[reply]
PS @StuRat: The usual definition is "now naturally occurring on Earth" AFAIK, and it's already 98 with this definition. A more general definition would increase the number to 100, since "einsteinium and fermium did occur naturally in the natural nuclear fission reactor at Oklo, but no longer do so." --Roentgenium111 (talk) 14:49, 31 January 2013 (UTC)[reply]
I could probably find the answer to this somewhere, but what about elements that don't have any long-lived isotopes? Would they have been formed in supernovas, even though all of them have decayed by now? Can we say that elements up to some number should have been formed in supernovas? Bubba73 You talkin' to me? 14:44, 31 January 2013 (UTC)[reply]
I don't think any elements beyond Z=100 have been claimed to form in supernovae, but it's an interesting question. "OR": In theory, every artificial element could be created naturally even on Earth (with an extremely low probability) by an extremely lucky natural collision of the same two nuclides from which it was synthesized artificially. But the probability might be so low, e.g. for ununoctium, that it may never have happened in Earth's history.--Roentgenium111 (talk) 14:56, 31 January 2013 (UTC)[reply]
Actually, I would be surprised if every element weren't created in supernovae, even very high atomic number elements, given the energies and numbers of particles involved. Certainly, if there exists sufficient energy and material here on Earth to create them, there exists sufficient energy and material to create such elements in a supernova. Now, they may be created in amounts too small to be confirmed spectroscopically from Earth, and they're all so short-lived that most won't exist at all past a few minutes or hours after the supernova; but that doesn't mean they didn't exist in the first place. --Jayron32 01:11, 1 February 2013 (UTC)[reply]

The difference between ice and snow?[edit]

Hi there,
I would like to know why ice is stiff while snow is soft, and why ice is transparent while snow is opaque.
What are the differences in their chemical structure?
Exx8 (talk) —Preceding undated comment added 09:37, 31 January 2013 (UTC)[reply]

It is the same chemical structure, but snow is in small particles, possibly complex in shape. All the extra surfaces on the snow flake particles reflect the light, so it looks white. See snow. Graeme Bartlett (talk) 10:31, 31 January 2013 (UTC)[reply]
A look at Snow may be helpful. It's got some really cool photos. (Pun intended.) And it's not always soft. It comes in many forms. It's said that the Eskimo have 53 different words for it because of that. Actually, they probably don't, but it highlights the point. HiLo48 (talk) 10:43, 31 January 2013 (UTC)[reply]
Snowclone discusses this. AlexTiefling (talk) 10:58, 31 January 2013 (UTC)[reply]
Eskimo words for snow. CambridgeBayWeather (talk) 00:12, 1 February 2013 (UTC)[reply]
It's not a chemical difference - it's a structural one. Snow is made of ice, but the ice you're familiar with (on frozen ponds, floating in your cocktail, whatever) has been formed into a more-or-less amorphous crystal, shaped like whatever it was in when it froze. This gives it the smooth surface and clear appearance you mention. Snow is formed high up, in the clouds, from tiny, tiny water droplets. This means that the shape of each individual snowflake is determined by the shape of the original microscopic ice crystal that it formed from. If you look at a single snowflake under a microscope, you'll see that it's got a distinctive six-sided shape, and it is transparent, as you would expect an ice crystal to be. The reason snow appears to be white is that each snowflake has dozens of tiny facets, and there are millions of them in any sizeable amount of snow. All those facets reflect light off that doesn't strike them more or less straight on. (Think of looking at a piece of glass at a very low angle - it will appear reflective.) The overall effect is that more or less the whole snowdrift reflects light, which therefore appears white. AlexTiefling (talk) 10:36, 31 January 2013 (UTC)[reply]
Just a correction - snow is not formed from "tiny, tiny water droplets". It forms directly from water vapour (gas) condensing directly to a crystaline solid - there is no liquid water phase involved. Roger (talk) 11:25, 31 January 2013 (UTC)[reply]
Depends how you define "tiny". ←Baseball Bugs What's up, Doc? carrots→ 11:59, 31 January 2013 (UTC)[reply]
(after ec)OK, I'll bite. Given that snow is made of ice, and the triple point of water occurs at a pressure of 0.006 atmospheres, how is it possible for the vapour form to turn directly into the solid form? Surely the pressure throughout the Troposphere is high enough that water has a liquid phase? AlexTiefling (talk) 12:03, 31 January 2013 (UTC)[reply]
It's about partial, not total pressure? Besides, a phase diagram only represents stable equilibria; effects like nucleation, supercooling etc. are not included. Ssscienccce (talk) 15:58, 31 January 2013 (UTC)[reply]
Indeed. Room pressure sublimation and deposition of ice does occur. Leave a tray of ice cubes undisturbed in a closed freezer for a few months. The ice cubes will shrink, and frost will form on the inside walls of the freezer. This happens at below 0 deg C and as such doesn't go through the liquid phase at all. --Jayron32 00:41, 1 February 2013 (UTC)[reply]
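To put a number on the partial-pressure point: what matters for deposition is whether the water vapour's partial pressure exceeds the saturation pressure over ice, not the total air pressure. A quick check (my own sketch, using Buck's 1981 empirical fit for saturation vapour pressure over ice; the −10 °C example is an arbitrary choice):

```python
import math

def saturation_pressure_over_ice(t_celsius):
    """Buck (1981) empirical fit, in pascals, for temperatures below 0 degC."""
    return 611.15 * math.exp((23.036 - t_celsius / 333.7)
                             * t_celsius / (279.82 + t_celsius))

TRIPLE_POINT_PA = 611.657  # water's triple-point pressure, Pa

# At -10 degC the saturation pressure over ice is only about 260 Pa, well
# below the triple-point pressure, so vapour at or below that partial
# pressure deposits directly as ice -- no liquid phase involved, regardless
# of the total air pressure around it.
print(round(saturation_pressure_over_ice(-10.0)))
```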
To help visualize it, find a nice clear ice cube and dump it in boiling water, then fish it right out with a big spoon. It should have cracks in it now. Each crack reflects light, so the cube looks whiter than before. Snow has a lot more of these reflective surfaces, so is even whiter, at least until the huskies go on it. StuRat (talk) 04:50, 1 February 2013 (UTC)[reply]
Or, take a nice clear piece of glass and pulverize it into dust. You can see through a pane of glass, not so much with the glass dust. --Jayron32 05:00, 1 February 2013 (UTC)[reply]

Electric discharge in vacuum[edit]

Is it possible to discharge electricity through gases in a vacuum? In a discharge tube, there is a small amount of air inside it, so conduction of electricity takes place. If there were no air inside the discharge tube, would the conduction of electricity take place in the same way as it does when there is some air? Show your knowledge (talk) 09:56, 31 January 2013 (UTC)[reply]

If there is no gas at all, and it is a vacuum there will be no gas discharge. However there may be electrons emitted from a hot surface, or if illuminated by light. These electrons will move in an electric field. If there is a very little gas, an electric field may accelerate ions or electrons fast and far enough before they collide to ionise further atoms to make more ions, and so conduct. Graeme Bartlett (talk) 10:38, 31 January 2013 (UTC)[reply]
By definition, a true vacuum contains no gasses. Electricity may still be conducted, but not in the same way. See Cathode ray. Someguy1221 (talk) 11:07, 31 January 2013 (UTC)[reply]
And let's make sure that this bit is abundantly clear: electromagnetic waves are not the same as electrons. Electricity is the term we usually use to describe the flow of electric charge. In practice, any time charge flows, an electromagnetic effect also occurs.
So, let's consider all cases:
  • A perfect vacuum, with no electric or electromagnetic effects: this is a valid case.
  • A vacuum, with a beam of electrons flowing in it: this is a valid case, it is a cathode ray in an evacuated chamber.
  • A perfect vacuum, with an electromagnetic wave propagating: this is a valid case. Electromagnetic waves can propagate in a perfect vacuum.
  • A vacuum, with both a flowing beam of electrons, and an electromagnetic wave. This is the most probable case in the real world, because any beam of moving electrons will (self)-interact with its own electromagnetic effects; and to get an electron beam, you often need to apply an external electromagnetic field (possibly one that will propagate as a wave).
Now, when we say "electric discharge," we typically mean corona discharge and/or dielectric breakdown - effects that require neutral atoms that can be ionized. So that brings us finally to our last case:
  • A non-vacuum (often at lower-than-atmospheric pressure), through which charged ion beams and electromagnetic waves can propagate;
    • ...the effects of which sometimes causes dielectric breakdown of the neutral atoms, i.e. "electric discharge."
Nimur (talk) 18:31, 31 January 2013 (UTC)[reply]
Since you used the word "conduction", then the answer is no, since that involves molecules bumping into each other, and there are no molecules in a perfect vacuum. Electrons will still move through a vacuum, but by other processes, such as thermal radiation. StuRat (talk) 04:54, 1 February 2013 (UTC)[reply]
Thermal radiation is electromagnetic radiation, and is NOT a movement of electrons (though it will cause any electrons present to move). StuRat's gone off half cocked again - he didn't even read the article he linked to. Also, he didn't address what the OP was actually asking. While the OP used the word "conduction", to which StuRat has incorrectly assigned a specific meaning, it is clear from the OP's question that he meant to ask whether electric current can flow through a vacuum, and the answer is yes - via a stream of electrons. The electrons can be provided by thermionic emission and/or secondary emission. Secondary emission can be initiated by tearing out surface electrons from electrodes by a sufficiently high electric field. Electric conduction has many forms - but is not molecules bumping into each other, as StuRat knows perfectly well. Are you trolling, StuRat? Floda 124.182.180.216 (talk) 05:40, 1 February 2013 (UTC)[reply]
"Thermal radiation is electromagnetic radiation generated by the thermal motion of charged particles in matter." Electrons are charged particles. If there's a troll here, it's you, and an unregistered troll, using multiple socks, at that. StuRat (talk) 05:49, 1 February 2013 (UTC)[reply]
We seem to be confusing thermal with electric conduction, and electromagnetic with particle radiation here. Thermal radiation is generated by electrons etc. but it is "carried" by photons. A stream of electrons would be an electric current, but would not be thermal conduction. One could call it particle radiation, but it is not, in itself, electromagnetic radiation. Nimur's analysis considered all the possibilities. Dbfirs 09:43, 1 February 2013 (UTC)[reply]
It sounds like somewhere above, thermal emission (of an electron) got confused with thermal radiation (of a photon, e.g. infrared radiation). This subtle terminology difference refers to two totally different effects. Our article uses the terminology "thermionic" to distinguish emission of an electron by thermal processes. That might be archaic, but it's certainly less ambiguous. Nimur (talk) 16:31, 1 February 2013 (UTC)[reply]

Armour and Antiarmour I[edit]

Do you think that armies today can depend only on antiarmour weapons instead of tanks ? 149.200.143.41 (talk) 12:43, 31 January 2013 (UTC)[reply]

It depends too much on the nature of the war that they expect to fight. Some armies are set up against a very specific threat (a belligerent neighbor, for example) - others are set up for total flexibility and the ability to fight any war. If you need flexibility because you don't know who you might be fighting in the next decade or two, then keeping some tanks in your arsenal might make good sense. On the other hand, imagine a country with a mountainous border with a warlike neighbor - for them, tanks are useless - so why have them? SteveBaker (talk) 13:57, 31 January 2013 (UTC)[reply]
No. Tanks are still nearly as much a key part of a major land offensive now as they were in 1939. The role of infantry fighting vehicles has grown considerably since 1939, but IFVs are very vulnerable to tanks, and the best way of dealing with that during offensive operations is... to have your own tanks.
Tanks are very effective against lightly armed forces, thus for example the concern that Gadaffi would use tanks against Benghazi, a fear which prompted the increase in overseas involvement. Small numbers of tanks can easily be neutralised if one has air superiority (as subsequently happened in Libya), but most armies can't count on that. (In the Kosovo War, overwhelming NATO air superiority turned out to be relatively ineffective in destroying heavily dug in and dispersed Serbian armoured forces.)
Anti-armour weapons held by ground forces are more intended to allow infantry to defend themselves from being attacked by tanks, rather than to allow them to take the initiative in attacking and destroying tanks, or to halt a determined advance by an armoured force.
Interesting examples are Singapore, an island nation with no historical involvement in overseas offensive operations; it has highly sophisticated modern weaponry including anti-armour, but still has over a hundred modern main battle tanks (and lots of smaller AFVs and IFVs). By contrast, New Zealand has no tanks at all (it has small numbers of armoured cars), because of its extreme remoteness from possible aggressors (its air force is tiny for the same reason). --Demiurge1000 (talk) 14:35, 31 January 2013 (UTC)[reply]
The UK has recently reduced its main battle tank fleet by 40% to about 200 vehicles, reducing further to under 60 by 2020. Since the end of the Cold War, the UK's military commitments have needed vehicles with a rapid reaction capability that can be easily transported, and which are more use in the counter-insurgency role than heavy tanks. An upgrade programme for the current Challenger 2, "one of the most heavily armoured and best protected tanks in the world" seem to have been put on hold, and a replacement programme called "Future Land Command" appears to exist only on paper. Alansplodge (talk) 16:44, 31 January 2013 (UTC)[reply]
The UK was certainly involved in the tank battles of the Gulf War and Iraq War, including 1st Armoured Division (United Kingdom) participating in the Battle of 73 Easting and the Battle of Norfolk with 108 Challenger I's, and in action around Basra in the second war with 120 Challenger II's. Rmhermen (talk) 17:31, 31 January 2013 (UTC)[reply]
Agreed, but if I recall correctly, they were less useful in the 2003 Iraq Invasion, especially when the operation in Basra became a counter-insurgency one. Anyhow, HMG has to decide how to spend its rather limited pennies, and they seem to have decided against spending them on tanks, whatever their past record. Alansplodge (talk) 18:22, 31 January 2013 (UTC)[reply]
Tanks were heavily used in the two wars in Iraq and (a few) in Afghanistan even though air superiority and very good anti-tank weapons were available. Rmhermen (talk) 17:55, 31 January 2013 (UTC)[reply]
Although the "air superiority and very good anti-tank weapons" were in the hands of the people who were successful with tanks. In the Gulf War, 180 British tanks destroyed 300 Iraqi tanks without loss. I'm not sure how many tanks were knocked out by Coalition aircraft, but it must have run into thousands. Alansplodge (talk) 21:20, 31 January 2013 (UTC)[reply]
If the "air superiority and very good anti-tank weapons" were sufficient, they shouldn't have needed any tanks. But apparently they did need them. Rmhermen (talk) 21:38, 31 January 2013 (UTC)[reply]
The Iraqis lost a total of 4,000 of 4,230 tanks deployed in the Gulf war, the Coalition lost 4 of 3,360 tanks deployed.[1] Alansplodge (talk) 21:20, 31 January 2013 (UTC)[reply]

Armour and Antiarmour II[edit]

Do you think that antitank weapons have put an end to wars, because tanks are easily destroyed by such weapons, especially when tanks are used in attack rather than defense? — Preceding unsigned comment added by 46.185.254.95 (talk) 14:22, 31 January 2013 (UTC)[reply]

Obviously no, there has not been an end to war. Further, it's not at all apparent that anti-armor weapons are decisively superior to armored vehicles' defenses, nor that it will inevitably become so, nor that such a state would persist forever. The history of warfare is one of measure and countermeasure, not a single final fixed balance. — Lomn 14:38, 31 January 2013 (UTC)[reply]
Advances in armour technology over the last few decades have included Chobham armour, Reactive armour and they're still experimenting with Reactive armour#Electric reactive armor or "smart armour". As Lomn says, whenever a new form of armour is invented, the boffins go back to the drawing board with their warhead designs. Alansplodge (talk) 16:06, 31 January 2013 (UTC)[reply]

But Chobham armour was defeated by Kornet and Metis-M missiles in the 2006 Lebanon War, isn't that right? — Preceding unsigned comment added by Tank Designer (talkcontribs) 16:22, 31 January 2013 (UTC)[reply]

Quite right. It's the tandem charge warhead that does the damage. Specifically to counter that threat, British (and presumably Israeli) tanks have since been upgraded with "a new passive armour package, including the use of add-on armour manufactured by Rafael Advanced Defense Systems of Israel". Alansplodge (talk) 17:00, 31 January 2013 (UTC)[reply]
Let's not forget the important role that was played by incredibly huge IEDs in the 2006 Lebanon war. For any large tank ε, there exists a larger explosion δ capable of destroying it. Nimur (talk) 18:36, 31 January 2013 (UTC)[reply]
Israel deployed several hundred tanks during the 2006 Lebanon war. Of these, only about 10% (52 tanks) were significantly damaged (45 tanks by anti-tank guided missiles). Further, only 5 tanks were rated as "destroyed" at the end of the conflict. In the end, the IDF considered the performance of their tanks to be satisfactory, even though the opposing forces were believed to have deployed several hundred anti-tank missiles. While it is true that anti-tank weapons can destroy tanks, in the fight between armour and antiarmour, neither side had a decisive advantage in Lebanon. More importantly, the IDF found the threat from antiarmour weapons sufficiently manageable that they were still able to accomplish most of their tank objectives, while sustaining what they considered to be acceptable losses. It is possible that other conflicts exist where antitank munitions were so effective that tanks were considered useless, but that wasn't the case in the 2006 Lebanon War. Ultimately, it is all a game of cat and mouse. You build a better antitank missile, then I'll build a better tank. So far, we haven't yet found an end to the cycle of developing measures and countermeasures when it comes to tanks. That doesn't mean though that there couldn't be an end. After all, many forms of warfare (e.g. horse mounted cavalry, trench warfare) have been made essentially obsolete by advancing technologies. Dragons flight (talk) 17:21, 31 January 2013 (UTC)[reply]

Funny taste when reboiling water[edit]

Try this: boil water for making tea, forget to use the water for about an hour, remember you wanted tea and reboil the same water. Now, the tea tastes funny (and the water smells funny, soapy or chlorine-odored if you wish). Any explanation? Gil_mo (talk) 14:41, 31 January 2013 (UTC)[reply]

I have not observed anything like that effect (and the water in the electric kettle frequently gets boiled 3-4 times before it needs to be refilled), so I don't think there's any universal water-specific effect at work. Rather, I'd guess that you've got some particular contamination to your boiling vessel (mineral deposits, whatever) that the water picks up each time it's boiled. That, or it's some sort of observational bias / placebo effect thing. — Lomn 15:07, 31 January 2013 (UTC)[reply]
Might it not be the presence of a distinctive mineral in the unboiled water, that settles out after boiling? Does cold boiled water taste more like freshly boiled water, or reboiled water? AlexTiefling (talk) 15:09, 31 January 2013 (UTC)[reply]
This page and the linked pdf have some potential explanations as well as methods of finding the potential cause, i.e. does this happen just from water boiled in a kettle or if boiled via saucepan too? Jebus989 15:28, 31 January 2013 (UTC)[reply]
It's a very well known English old wives' tale that you make tea with water that has only been boiled once, otherwise the taste will suffer. The reason given is generally that it reduces the amount of oxygen in the water. Sounds a bit improbable to me, but these blog entries[2] [3] [4] repeat the hypothesis. Alansplodge (talk) 15:55, 31 January 2013 (UTC)[reply]
Comment (Attempting to do my bit for the Green movement here) @Lomn said "... the water in the electric kettle frequently gets boiled 3–4 times before it needs to be refilled". Hmmm. Assuming that such water is allowed to cool before re-boiling, I suggest that your method uses more electricity than you would by otherwise filling the kettle with only enough water for each requirement --Senra (talk) 16:14, 31 January 2013 (UTC)[reply]
Mr Splodge, see Wikipedia:Reference desk/Archives/Science/2010 February 23#Making tea with zingless water. -- Jack of Oz [Talk] 06:59, 2 February 2013 (UTC)[reply]
Dissolved bicarbonate minerals could play a role; they are removed by boiling, see hard water. The main problem is finding data about the kinetics instead of just the equilibrium states. The same may be true for the oxygen, nitrogen and CO2 content. Boiling water can't contain them, but things don't happen instantaneously, so time may be a factor. Ssscienccce (talk) 16:55, 31 January 2013 (UTC)[reply]
It would be very easy for Gil mo to do an experiment. Use a glass or porcelain-covered steel container right out of the dishwasher. Fill with distilled water. Pour some into a small glass. Boil once, let cool and fill a second glass. Boil and cool several more times and fill a third glass. Do they taste different? If possible have an assistant mix them up and label the A, B and C to make the test single-blind. --Guy Macon (talk) 17:06, 31 January 2013 (UTC)[reply]
Water from a tap with an aerator tastes better because it has more air (or is it oxygen?) than regular water. Boiled water is deficient in dissolved gasses. Sagittarian Milky Way (talk) 00:40, 1 February 2013 (UTC)[reply]
That's a very good point. Gil mo could add that to the experiment by repeatedly pouring the water from one water glass to another and seeing what difference that makes. --Guy Macon (talk) 06:17, 1 February 2013 (UTC)[reply]
Another possible reason is that each time you boil off some of the water, you concentrate all the salts and other dissolved molecules further. This will make the water taste bad, at some point. StuRat (talk) 05:01, 1 February 2013 (UTC)[reply]
I must confess that I frequently reboil water for tea, coffee, and occasionally other things, but have never noticed this. Water direct from the tap always has a fresher taste, but I've never noticed anything in particular caused by the boiling itself. I have been told that it's important to wash kettles in particular quite regularly, as repeated boiling and refilling causes the mineral content in the water to rise (such minerals being left behind as the water boils away, as Stu said). I've never actually had problems with it, though. Evanh2008 (talk|contribs) 05:10, 1 February 2013 (UTC)[reply]
Does the initial boiling remove previously dissolved oxygen, allowing the subsequent uptake of other gases? I ask this because when I was researching for an article on a shipboard freshwater distiller, the sources stated that the water coming out of the still had to be immediately oxygenated, or it would take up undesirable gases, leaving in the water the taste of grease, tar, rubber, soap, cooking odors, leather, wood, paint or smoke or whatever (seamen?), and would thereafter be so bad tasting as to be practically undrinkable. Might this happen to a lesser degree to water which is merely boiled rather than distilled? It might go from being saturated with tasteless oxygen or nitrogen, to gaining some other odor present in the kitchen. Edison (talk) 16:29, 1 February 2013 (UTC)[reply]
You have seamen in your kitchen ? Are you sure it isn't a galley ? StuRat (talk) 17:11, 1 February 2013 (UTC) [reply]
On the PG Tips FAQ page under "How do I make the perfect cup of tea?" it says you should "use fresh tap water: it contains more oxygen, which makes for a fuller flavour". I don't know how they've come to that conclusion but I would expect that they would at least have run a number of blind tastings, as they will want their product to taste its best. Richerman (talk) 10:39, 2 February 2013 (UTC)[reply]
I wouldn't draw that conclusion. Companies are just as likely to repeat random rumours and 'old wives' tales' without considering whether they have any actual evidence for the claims, even their own evidence. This is particularly true for something like a FAQ, which could easily have a low level of quality control. Now if they had a lot of people telling them they shouldn't use fresh water because their tea doesn't taste so good, they might start to consider what evidence they have, but otherwise, it's not like it really matters that much to them. In fact, that very FAQ says you should drink 2L of water a day. Yet as our Drinking water article says, and as you can read more about in its references, the actual evidence for this is slim; it's one of those often-repeated claims whose origin we aren't even quite sure of. Nil Einne (talk) 11:19, 3 February 2013 (UTC)[reply]

Armour and Antiarmour III[edit]

Do you think that tank ammunition can penetrate armour that anti-armour weapons cannot? — Preceding unsigned comment added by Tank Designer (talkcontribs) 17:07, 31 January 2013 (UTC)[reply]

Tanks can fire many types of ammunition, including ammunition specifically intended to defeat armor. As such, a distinction between "anti-armor" and "tank" is flawed. I'm sure that one particular weapon system is most effective against one particular armor system, and I suspect that there are situations where kinetic energy penetrators, which are not generally missile-launched, are superior to chemical penetrators like high-explosive anti-tank warheads. However, tanks can fire either type, anti-tank guns can fire either type, and while missiles are primarily the chemical variety, I would not be surprised to learn of missile-based kinetic penetrator devices. — Lomn 18:43, 31 January 2013 (UTC)[reply]
In that case, you might not be surprised that the Compact Kinetic Energy Missile was cancelled. ~E:74.60.29.141 (talk) 03:41, 1 February 2013 (UTC)[reply]

Turtle survives 30 years in a storage room?[edit]

Hey all, I just read a couple of articles about a reported Red Foot Tortoise that survived for 30 years locked inside a family's storage room in their house. They assume that it must have survived on termites or other insects it found. Is this even possible? Wouldn't it need some sort of water source, or could it possibly live off the little bit of nutrients found in whatever random bugs it ate? It just seems too hard to believe. Here is the first article I read: [5] I also saw it on Yahoo News. Zerojjc (talk) 22:29, 31 January 2013 (UTC)[reply]

I am skeptical. I note that our article on Red-footed tortoise says that they use communal burrows, following one another's scent trails. I would tend to think that maybe a tortoise found a way out of or into that room, and the fact that one was found in there now is either coincidence, or just maybe the legacy of that one escape attempt... Wnt (talk) 22:42, 31 January 2013 (UTC)[reply]