
Wikipedia:Reference desk/Archives/Science/2014 February 18

From Wikipedia, the free encyclopedia
Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


February 18


Can liquids be strained from liquids?


We are all familiar with the fact that you can strain solids from liquids, even very small particles if the strainer mesh is fine enough.

But can you strain liquids from liquids? Like, let's say you had a mix of oil and water, and let's say the oil molecules were bigger than the water molecules: could you then pass the liquid through a strainer so fine that the water molecules would pass through the strainer but the oil molecules would be separated?

Or does this scenario defy the very property of liquids?--Jerk of Thrones (talk) 04:00, 18 February 2014 (UTC)[reply]

Dialysis tubing? And molecular sieves are similar, but more like a selective sponge than a selective filter (though they can be "wrung out", to continue the analogy). DMacks (talk) 05:07, 18 February 2014 (UTC)[reply]
One can use a Conical plate centrifuge ... to separate two liquid phases from each other by means of an enormously high centrifugal force.  ~:71.20.250.51 (talk) 05:58, 18 February 2014 (UTC)[reply]
Parallel plate separators in effect strain oil droplets from water, and are much used in refinery wastewater systems for that very reason. FWiW 67.169.83.209 (talk) 06:03, 18 February 2014 (UTC)[reply]
See also: Centrifugal water–oil separator (however, these are not "strainers").  ~:71.20.250.51 (talk) 06:12, 18 February 2014 (UTC)[reply]
Osmosis is an example of true molecular sieving, though the way it is described sometimes hides this: the effects at this level are usually framed as thermodynamically driven flows, which tends to obscure the underlying detail. Oil and water is a bad example, because they don't mix at the molecular level, but you can very effectively remove molecules from water by using a semi-permeable membrane (e.g. desalination through reverse osmosis). —Quondum 04:54, 19 February 2014 (UTC)[reply]
Liquid-liquid_extraction? DanielDemaret (talk) 21:38, 19 February 2014 (UTC)[reply]
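For a rough sense of the scales involved in the molecular-sieving answers above, here is a back-of-the-envelope sketch. The diameters and the membrane cutoff are approximate figures supplied for illustration (they are not from this thread, and real membrane selectivity depends on chemistry as well as size):

```python
# Rough molecular scales relevant to "straining" one liquid from another.
# All values are approximate effective/kinetic diameters; treat them as
# order-of-magnitude figures, not precise constants.
diameters_nm = {
    "water (H2O)": 0.27,                # kinetic diameter ~0.265 nm
    "ethanol": 0.45,
    "glucose": 0.70,
    "typical triglyceride (oil)": 2.0,  # loose estimate for a fat molecule
}

# Illustrative cutoff: reverse-osmosis membranes pass roughly sub-nm species.
ro_membrane_pore_nm = 0.5

for name, d in sorted(diameters_nm.items(), key=lambda kv: kv[1]):
    verdict = "passes" if d < ro_membrane_pore_nm else "blocked"
    print(f"{name:28s} ~{d:.2f} nm -> {verdict}")
```

The point of the sketch: water really is much smaller than an oil molecule, so a sufficiently fine "strainer" (a semi-permeable membrane) can in principle pass one and block the other, as the reverse-osmosis answer says.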

By a finger


What's the approximate diameter of a typical ring finger of an adult man? of an adult woman? Thanks in advance! 67.169.83.209 (talk) 06:11, 18 February 2014 (UTC)[reply]

I doubt there is such a thing. This might help though - Ring_sizer. 196.214.78.114 (talk) 06:39, 18 February 2014 (UTC)[reply]
(Just to clarify what I said above: depending on where one took a sample to get a "typical" diameter, the results could vary greatly. I would expect a sample in Japan to be way different from one in the US.) 196.214.78.114 (talk) 06:47, 18 February 2014 (UTC)[reply]
Taken from http://www.bluenile.com/find-ring-size "Ring Sizes in the US. Women’s rings typically range from size 3 to 9. The most commonly purchased women’s ring sizes at Blue Nile range from size 5 to 7. Size 6 is the most popular ring size.
Men’s rings typically range from size 8 to 14. The most commonly purchased men’s ring sizes at Blue Nile range from size 8 to 10-½. Size 9 is the most popular ring size." 196.214.78.114 (talk) 07:07, 18 February 2014 (UTC)[reply]
Those are quite a bit bigger than I hoped for (my question actually had to do with whether it's possible, in principle, to cut an improvised wedding ring out of an empty cartridge case ;-) but thanks anyway! 67.169.83.209 (talk) 08:46, 18 February 2014 (UTC)[reply]
Well, a wedding ring doesn't have to be a complete circle. (this one or this one, for example) - and you could (with some degree of skill) cut a spiral out of the cartridge case and then twist the ends to increase the radius and wind up with something like this one. I think it can be done, with a little imagination. SteveBaker (talk) 13:34, 18 February 2014 (UTC)[reply]
Thanks! Yes, my character could cut two spirals out of the case and forge the ends of each one together to form a ring. (How come I hadn't thought of that?) 67.169.83.209 (talk) 06:30, 19 February 2014 (UTC)[reply]
(That's why they pay us Ref Desk respondents the big bucks! :-) SteveBaker (talk) 16:41, 21 February 2014 (UTC)[reply]
On second thought, a 12-gauge shotgun shell would be big enough to cut a ring from it without having to resort to these kinds of complex tricks. 24.5.122.13 (talk) 07:13, 22 February 2014 (UTC)[reply]
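To connect the ring sizes quoted above to actual diameters (the original question), the commonly published US convention maps size 0 to about 11.63 mm of inner diameter, with roughly 0.8128 mm added per full size. A quick sketch (the constants are the usual published ones; treat them as approximate):

```python
def us_ring_size_to_diameter_mm(size: float) -> float:
    """US ring size -> inner diameter in millimetres, using the commonly
    published US convention: size 0 is about 11.63 mm, and each full
    size adds about 0.8128 mm."""
    return 11.63 + 0.8128 * size

# The most popular sizes per the Blue Nile figures quoted above:
print(f"women's size 6: {us_ring_size_to_diameter_mm(6):.1f} mm")  # ~16.5 mm
print(f"men's size 9:   {us_ring_size_to_diameter_mm(9):.1f} mm")  # ~18.9 mm
```

That gives roughly 16.5 mm for a typical women's ring and 18.9 mm for a typical men's one; since a 12-gauge shotgun's nominal bore is about 18.5 mm, the shotgun-shell suggestion above is at least dimensionally plausible.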
I once knew someone whose wedding ring was knotwork; the rabbi(?) inspected it carefully to confirm that the knot consisted of a single endless strand. —Tamfang (talk) 20:15, 18 February 2014 (UTC)[reply]
A rabbi? Of what denomination? The construction of the ring matters very little in Orthodox Judaism (the material and the absence of a gem do, though), and other denominations are much less likely to care at all about the particulars of a ring... הסרפד (call me Hasirpad) 01:11, 19 February 2014 (UTC)[reply]

Natural convection article


Natural convection contains a "warning" that certain values are wrong. Can someone work out what needs to be done? There is a bit more info at Talk:Natural convection#The values indicated are wrong. Johnuniq (talk) 09:32, 18 February 2014 (UTC)[reply]

Do homeopathy, spiritual healing and other forms of quackery work after all?


See here, I watched the documentary yesterday. It seems to me that if the placebo effect has such a profound effect, then even a drug that by itself (i.e. disregarding the placebo effect) would make things mildly worse could still have a net benefit. One can then ask if in the Middle Ages people really did benefit from bloodletting, even for treating anemia. Count Iblis (talk) 12:23, 18 February 2014 (UTC)[reply]

Clearly the placebo effect would result in some of those treatments producing a modest improvement for some people in some diseases. However, using that to suggest that these things actually "worked" is a bit contrary to modern usage of the word for drugs and other medical interventions. When drugs are tested that turn out to be no more effective than a placebo, we say that they don't work - and they won't get regulatory approval even though some patients might be improved by use of them.
Something like bloodletting for anemia would undoubtedly perform worse than placebo - so even less effective than a sugar pill - but whether it would be more or less effective than doing nothing depends a lot on the size of the placebo effect in anemia treatment. Certainly we could imagine that the mere sight of lost blood might possibly cause some metabolic pathway to kick into high gear and thereby improve anemia - but it's far from clear that this would work. The placebo effect is a patchy thing - some aspects of treatment (notably, the treatment of pain) are extremely susceptible to the placebo effect - whereas others show no placebo effect whatever.
SteveBaker (talk) 13:58, 18 February 2014 (UTC)[reply]
It is worth remembering that there is more than one kind of placebo – it's not always Treatment X versus Sugar Pill – and the nature of the placebo will very strongly influence its apparent potency. When people perceive a treatment as being more involved, that treatment will have a greater apparent effect. An intravenous injection of neutral saline will be a more potent placebo than a plain white sugar pill. (Even if you're just comparing pills, there are studies which have shown that the color of the sugar pill affects its perceived potency and effects. Simply being told that one sugar pill is more expensive than another can make it more effective; see our article on placebo for refs.) It's possible that moderate bloodletting, in many cases, would perform 'better' than sugar pills – particularly in an age before detailed clinical chemistry and robust quantitative measures of health – simply because it was a visibly more-aggressive intervention. TenOfAllTrades(talk) 21:19, 18 February 2014 (UTC)[reply]
Would you drink a homeopathic beer? Zzubnik (talk) 14:06, 18 February 2014 (UTC)[reply]
In medicine, the question usually asked is not "Does this work at all?" but more often "Does this work better or worse than our current best treatment?" On this test, these faux-medicine treatments don't tend to fare well. For specific info on evidence, whether for mainstream or faith-based medicine, I recommend http://sciencebasedmedicine.org - the bloggers there do a good job. Friday (talk) 14:09, 18 February 2014 (UTC)[reply]


Certainly it can be argued that putting homeopathic treatments on the shelves at WalMart and some pharmacies (yes, they really do that!) is beneficial because the placebo effect is real, the product packaging looks convincingly like a real drug - so some people will take the treatment and get somewhat better. There are many problems with doing that though.
  1. These things are (typically) nothing more than tiny bottles of water or sugar or starch pills. The manufacturer is charging $7 to $20 for that. A much cheaper (and equally effective) placebo could easily be made without resorting to all of the bullshit that is attached to homeopathy. A pack of TicTac mints (which look convincingly like pills) costs about a dollar - so why pay $20 for pills that are infused with ground up duck liver diluted in water to the extent that there isn't one single molecule of duck liver left in each pill? It's all about packaging and labeling and being found in the pharmacy section instead of the candy aisle.
  2. The problem that worries me the most is that some people will be persuaded to take the homeopathic placebo instead of taking a real drug. There are essentially no legal limits (in the US, at least) to what the manufacturers claim for these "treatments" - where "real" drugs have to be carefully labelled with what they do and all of their side-effects. Many people who don't have any clue what the word "HOMEOPATHIC" means on the bottle are going to be impressed that this stuff has no side-effects whatever (it's just water, after all) - and be taken in by all of its claims of effectiveness - and avoid getting proper treatment as a consequence.
  3. Once in a while, a homeopathic treatment (which has no intentionally active ingredients) has some unforeseen side effect from its supposedly inactive ingredients - as attested by the problems with Zicam - which was claimed to be homeopathic - but included some high concentrations of zinc compounds that caused a bunch of people to permanently lose their sense of smell. The lack of legally mandated testing on these products is a dangerous thing!
But the crucial thing here is that the claim that some medical treatment "works" should be taken to mean "works better than placebo" - because that's the standard to which all treatments are held. By that standard, these quack cures don't work at all.
The tricky part of the debate is the mechanism and the ethics of prescribing placebo in cases where mainstream medicine is either ineffective or unnecessary. It could certainly be argued that if someone goes to a doctor for a mild, non-chronic headache - then it would be better to prescribe a harmless sugar pill than aspirin or Tylenol because the sugar pill eliminates a bunch of risks from side-effects - and will probably help the headache reasonably well.
SteveBaker (talk) 14:42, 18 February 2014 (UTC)[reply]
You seem to believe that homeopathic placebos have no negative (=side) effects because "it's just water, after all". By that argument they wouldn't have positive effects either. Placebo side effects do exist; the wikipedia article is nocebo, but it doesn't seem to be very good. -- BenRG (talk) 23:11, 18 February 2014 (UTC)[reply]
It all depends what you mean by "work". Do certain forms of alternative medicine produce cures? Some may do - physiotherapy, Acuscope treatment for example. Do certain forms of alternative medicine make people feel better? Of course, otherwise people wouldn't use them. So in that sense they can be said to "work". But if all the evidence gathered is inadmissible because it arises from personal experiences and not research (which is expensive and tends to be paid for by drug companies anyway), then there will be no evidence that it works. Speaking personally, I believe that hands-on healing works because I personally have benefited from it. But my testimonial is of no use in the scientific world. --TammyMoet (talk) 15:05, 18 February 2014 (UTC)[reply]
For what it's worth, nearly any common treatment modality, mainstream or faith-based, has been tested in proper scientific studies, usually many many times. So, it's not that we _only_ have anecdotes to go on. Friday (talk) 15:10, 18 February 2014 (UTC)[reply]
I generally hold Tammy's view: there's a difference between believing I benefit from something and objective evidence that it benefits me. If every time I drink some Vitameatavegamin my headache goes away, it would be silly to stop drinking it just because it doesn't work better than placebo in double-blind trials. However (1) I don't have any objective reason for recommending it to someone else; (2) my experience doesn't advance medical knowledge at all; and (3) I'm out the money it took to buy it. In a world where people deceive others for profit, and deceive themselves and others to make sense out of their experiences, we need these things to be regulated. OldTimeNESter (talk) 20:48, 18 February 2014 (UTC)[reply]
A portion of the placebo effect is measuring bias. If a doctor believes you've gotten a good treatment, he may be more optimistic when he records your condition. So, on paper it may seem as though you've improved more than you really have.
That aspect of the placebo effect sure doesn't help the patient. 75.69.10.209 (talk) 04:11, 20 February 2014 (UTC)[reply]
And that's why reputable studies use double-blind testing: the doctor doesn't know if you've gotten a good treatment or a placebo, either. --Carnildo (talk) 02:12, 21 February 2014 (UTC)[reply]
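The measuring-bias point above can be illustrated with a toy simulation: give an inert "drug" zero true effect, then let an unblinded assessor unconsciously inflate the recorded scores of treated patients. This is a hypothetical model, not data from any study, and the bias size is an arbitrary assumption:

```python
import random

random.seed(42)

def trial(n=1000, blinded=True, assessor_bias=0.5):
    """Simulate a trial of an inert treatment (true effect = 0).
    Each patient's real improvement is random noise; an unblinded
    assessor unconsciously adds `assessor_bias` to the recorded
    scores of treated patients. Returns the apparent treatment
    effect (mean treated score minus mean placebo score)."""
    treated = [random.gauss(0, 1) + (0 if blinded else assessor_bias)
               for _ in range(n)]
    placebo = [random.gauss(0, 1) for _ in range(n)]
    return sum(treated) / n - sum(placebo) / n

print(f"unblinded apparent effect: {trial(blinded=False):+.2f}")  # roughly +0.5
print(f"blinded   apparent effect: {trial(blinded=True):+.2f}")   # roughly 0.0
```

With blinding, the assessor cannot tell who got the treatment, the bias term vanishes, and the apparent effect of the inert drug collapses to noise around zero - which is the whole rationale for double-blind testing.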
The sort of thing that told me paracetamol was no use for me but aspirin was. I used to take a cold and flu medicine with aspirin and a lemon drink, but one day I found that it was practically totally ineffective. I hadn't the foggiest what was wrong until I found out they'd just replaced the aspirin with paracetamol. If they were placebos they would have worked regardless. Dmcq (talk) 09:02, 21 February 2014 (UTC)[reply]

Wrong Science (Victorian)


I'm looking for examples of science "facts" from the Victorian era that were later proved wrong, or theories which were widely accepted but totally wrong in the end. Not pseudo-science, but things like the origin of the Sun's energy (which as I recall was believed to be the energy of gravitational collapse) which were widely accepted by educated scholars of the day. I am not sure if this is really a Humanities question, but I am trying here first. Tdjewell (talk) 14:18, 18 February 2014 (UTC)[reply]

Interesting question. You might want to take a look at Category:Obsolete scientific theories.--Shantavira|feed me 14:25, 18 February 2014 (UTC)[reply]
That's awesome! Just the sort of thing I was looking for. Tdjewell (talk) 14:38, 18 February 2014 (UTC)[reply]
By Victorian times, science was starting to get fairly serious - so the number of utterly wrong "mainstream" theories is relatively small. Go back 100 years before that, though, and there were some seriously screwed up ideas in the mainstream. SteveBaker (talk) 14:46, 18 February 2014 (UTC)[reply]
The Victorian era was ~60 years long, and saw quite massive changes in scientific understanding. I think the theory of luminiferous aether could be considered an archetype of widely believed (and reasonable) scientific theories nowadays regarded as refuted. The Michelson–Morley experiment seriously wounded it, and Einstein put the final nail into the coffin. --Stephan Schulz (talk) 15:14, 18 February 2014 (UTC)[reply]
A big one that didn't gain mass acceptance until relatively recently is plate tectonics. Some of the Victorian beliefs (which don't really have a concise name) are discussed at Timeline_of_the_development_of_tectonophysics. One bit that caught my eye was that the best estimate for age of the earth was 20-400 million years old in the 1860s, but by 1900 the estimates were in the ~3 billion year range. SemanticMantis (talk) 15:27, 18 February 2014 (UTC)[reply]
When I was in grade school, I recall that many scientists were still pooh-poohing the notion of Pangaea, even though to the casual observer it was obvious that the continents could fit together like puzzle pieces. Science, like religion, can sometimes let dogma get in the way of observable facts. The difference is that scientists usually come around. ←Baseball Bugs What's up, Doc? carrots 15:35, 18 February 2014 (UTC)[reply]
Lamarckism is the idea that acquired characteristics can be passed down, so that a short-necked mammal that stretches upward to reach a branch might acquire a longer neck, and over generations its descendants might become giraffes. It was a popular theory in the early Victorian era. It had obvious appeal, in that if you worked hard and studied hard then your children would be strong and smart. Phrenology was popular in the early Victorian era: the theory that one could discern someone's mental faculties by feeling the bumps on the skull to determine the underlying brain structure. Until the germ theory of disease was proved by Pasteur, Koch and others in the second half of the 19th century, in the 1870's and 1880's, the Miasma theory was held by many doctors: that outbreaks of disease were caused by bad-smelling vapors blown from swamps or other foul places. It was also the judgement of leading scientists in the 1870's that arc lights were fine for streets or large public buildings, but the electric light could not be "subdivided" or made small enough to have one in each room of a house, since arc lights were inefficient in small sizes, and incandescent lights burned out too quickly. Then Edison and Swan developed practical incandescent lights by 1880. Edison (talk)
Caloric theory seems to have lasted well into the 19th century. There was also Phlogiston theory, but it appears that that was debunked in the 18th century (in fact Caloric theory partly displaced it). AndrewWTaylor (talk) 16:15, 18 February 2014 (UTC)[reply]


Pellagra is a skin disease caused by a vitamin deficiency, and was common in the 19th century among populations who lived on corn (maize). Throughout the Victorian period and up until the early 20th century, scientists thought it was caused by a toxin in the corn or perhaps by germs, rather than by a deficiency of niacin. There was a very hard battle in the early 20th century to show that it was not a germ disease, with the medical establishment claiming the dietary experiments were "half-baked" and "fraud," so the incorrect germ or toxin theory of pellagra would have been well established in Victoria's lifetime. In Victoria's time, there was the notion that people had a certain "station in life" based on Social status such that a lower class family's child should not aspire to get a good education and enter a learned profession. Social stratification was thought of as hereditary. An attempt to "rise above one's station" was likely to result in failure. A more modern view is that there is a wide and overlapping distribution of intelligence among the offspring of the different social strata, although wealth does lend numerous advantages which work against meritocracy and Achieved status. Edison (talk) 16:23, 18 February 2014 (UTC)[reply]
Forgive me, but wasn't this a Star Trek TNG episode, Darmok and Jalad at Pellagra? μηδείς (talk) 05:05, 19 February 2014 (UTC)[reply]

(Aside point) How come Rutherford's model is in the category of obsolete science? Is it because the idea doesn't really take into account wave-particle duality? I didn't know it was considered obsolete! 80.254.147.164 (talk) 17:23, 18 February 2014 (UTC)[reply]

Old scientific theories seldom truly die, they just get .. reinterpreted. Lamarckism becomes epigenetics, the heat of solar collapse is an important step in star formation, Prout's hypothesis becomes proton and neutron theory, luminous aether yields to spacetime. Of course, statements that things are impossible fare much more poorly. Wnt (talk) 19:28, 18 February 2014 (UTC)[reply]
Nonsense, Wnt. Lamarckism was discredited over a century before what is now called epigenetics was formulated. There was no such evolution, just the reuse of a word coined in 1942, the better part of a century after Darwin disproved Lamarck. μηδείς (talk) 22:10, 18 February 2014 (UTC)[reply]
It's definitely not that clear-cut. To begin with, Darwin himself expounded the concept of Gemmule (pangenesis) (which has some things in common with miRNA) which is a method of inheritance of acquired characteristics which is typically what people mean by Lamarckism. (Really, there wasn't much to Lamarckism per se, it was just some vagueish speculation, and you can definitely say Darwin improved it!) But the point is, things dismissed as Lamarckism, such as Lysenkoism (which was really Michurinism, as Lysenko himself called it) cited experiments which in the past decade have actually been confirmed to some degree. Michurin's philosophically inconvenient practice of graft hybridization did in fact produce valuable strains in his day. Of course, at the same time, one can say that Lamarckism was definitively defeated in that organisms aren't trying to enact some cosmic imperative to become more complicated - epigenetic mechanisms are still evolved mechanisms to deal with changing environments in the evolutionary short term (a few generations). Wnt (talk) 04:05, 19 February 2014 (UTC)[reply]
I suppose what I want is for you to be more clear as to the mechanism you are claiming, since DNA methylation, while "acquired" is an acquired state of DNA, just as mutations are acquired, not an acquired trait, and not in any way comparable to acquiring dark skin from one's parents' tan, or a long neck from one's parents' stretching theirs. μηδείς (talk) 05:00, 19 February 2014 (UTC)[reply]
What I suppose you're missing is that CpG dinucleotides are severely depleted in the human genome. They do not occur at the rate one would expect by chance, given the numbers of C's and G's. This is because methylcytosine converts to thymine in a transition mutation. I think this still has not been explored enough, doubtless in part because it's a rare event that is not that easy to investigate. But the point is, CpG islands are important for gene activity, and the transition from CpG to mCpG occurs within the organism's lifetime in response to circulating factors that include hormones and small RNAs, which affect both the target organ and (potentially) germ lineages. These methylation events occur in association with a wide variety of histone modifications, and some of these events are heritable to the next generation. When the methylcytosine finally mutates, the result may be a permanent heritable reduction in a gene's activity. Potentially there may also be an increased rate of changes in function (if in coding sequence, for example) or localization of the gene. So the body sort of has a way to say "look, this gene isn't working out for me; I'm turning it down for now, but feel free to fiddle around with it". That's getting really close to the original concept of Lamarckism.
Now to be sure, giraffes are a lousy example, chosen to show Lamarckism can't explain everything. Obviously a giraffe can't stretch its neck period - the whole purpose of vertebrae (one purpose) is to accept compressive force as the muscles twiddle the neck this way and that. Now, if you wanted to look at the behavioral propensity of a giraffe to lift up its front legs for a moment, or the rate at which it repairs damaged collagen in its tendons, who knows, you might find something. But of course, as we know, proper Darwinian evolution does occur, and when selecting out an extreme from the preexisting variation it actually occurs pretty quickly, so there's no particular need to construct a story of giraffe evolution based on acquired inheritance. The two mechanisms both exist, and simple selection seems to be by far the most common in long-term directed change. (perhaps excluding Cope's rule? Also, socially important change between a few human generations may be another matter)
Lastly I should point out a phenotypic trait is a measurement - it can be the result of one gene, many genes, partly environmental, epigenetic, etc. Stature for example. A genetic trait is essentially a subset - an observation of a gene. Sometimes a gene is observed coarsely, by measuring something about the product protein (A, B, O blood type), sometimes more precisely (specific alleles of the blood type locus, some of which have weak activity that can lead to reactions against "same type" donors). It really isn't precise until it gets down to the single nucleotide polymorphism level. Traits like that say a lot about genes, inheritance, population biology, not so much about whether someone will get cancer. I'm not sure why you think I misused these terms. Wnt (talk) 20:35, 19 February 2014 (UTC)[reply]
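The CpG depletion mentioned above is conventionally quantified as an observed/expected ratio (the Gardiner-Garden and Frommer style metric used to call CpG islands). A minimal sketch on toy strings of my own; the genome-wide figure of roughly 0.2-0.25 for bulk human DNA is from the literature, not computed here:

```python
def cpg_obs_exp(seq: str) -> float:
    """Observed/expected CpG ratio for a DNA string:
    obs/exp = N(CG) * len(seq) / (N(C) * N(G)).
    Values well below 1 indicate CpG depletion; CpG islands
    score high. Returns 0.0 if the sequence has no C or no G."""
    seq = seq.upper()
    n_c, n_g = seq.count("C"), seq.count("G")
    if n_c == 0 or n_g == 0:
        return 0.0
    return seq.count("CG") * len(seq) / (n_c * n_g)

print(cpg_obs_exp("CGCGCGCGCG"))    # CpG-rich toy string -> 2.0
print(cpg_obs_exp("CATGCATGCATG"))  # no CpG dinucleotides -> 0.0
```

In a long random sequence the ratio hovers near 1; the fact that bulk genomic DNA scores far below 1 is the depletion signature of the mCpG-to-TpG mutation pathway described above.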
You seem to be making up or quoting some fringe point. Lamarckism is the acquisition by offspring of parentally acquired phenotypic traits. If you want to use some entirely unrelated definition, you should use some neologism to fit it. Lamarckism is well-defined. What the skewing in the numbers of genetic nucleotides has to do with baby giraffes getting darker spots from their parents' sunburns is beyond me. μηδείς (talk) 02:59, 22 February 2014 (UTC)[reply]
More directly, the Rutherford model did not give any structure to the electron cloud, and thus was unable to explain large parts of the behaviour of atoms, from chemical bonding to spectral lines. Similarly, it did not recognise that the nucleus is made up of neutrons and protons. It's not strictly "wrong" (at least in general terms), but it's been superseded by the Bohr model, and then later more complex quantum mechanical models. It's obsolete in the same sense in which a Tin Lizzie is obsolete as a car, or HMS Victory is obsolete as a battleship. --Stephan Schulz (talk) 19:37, 18 February 2014 (UTC)[reply]
Actually all the models that had orbiting charged particles were invalidated by Maxwell's equations but I'm not sure if they had a good replacement so it seems they ignored it. --DHeyward (talk) 20:34, 21 February 2014 (UTC)[reply]

Isaac Newton also had some weird alchemy theories that actually turned into real science (it's buried in either the main article or sub-articles). --DHeyward (talk) 20:34, 21 February 2014 (UTC)[reply]

Medieval dome construction method


I saw a PBS TV show on the construction methods of the dome for the Florence Cathedral. They mentioned that it wouldn't have been practical to build a wooden frame beneath the dome, to hold it in place until completed, due to the volume it would need to fill. They then described how it was likely constructed without a frame.

But I had an idea for another way it could have been constructed (I'm not saying it was, just that it could have). Beneath the dome, on the floor, they could build a horizontal wheel. On that wheel they could construct a wooden arch form (not a dome), up to the height they wanted for the dome. They could then build a masonry arch above the wooden arch. This would become the first part of the dome. Once set, they would then lower the wooden arch form slightly (pull out some shims), rotate it, raise it back up, then build the next masonry arch above that. Repeat this process until the dome is completed.

So:

1) Is this method practical ?

2) Was this method ever used ?

3) Is there a name for this method ?

StuRat (talk) 21:04, 18 February 2014 (UTC)[reply]

My guesses for 1,2,3 are all "I don't think so". That dome is a cloister vault. Unlike the diagram at that article, the one in Florence has an octagonal cross section. So, your hypothetical arch form would have to change shape if you wanted to use it outside of the 4 special directions. Florence_Cathedral#Dome has a lot of detail, and mentions a whole system of chains that was used to reduce hoop stress in the absence of buttresses. My point is, the chain thing wouldn't really work if you tried to build one arch or "rib" of the cloister at a time. Also, it is not clear to me that the 8-radial-part "skeleton" dome would be structurally stable enough to hold together while you were spending decades filling in the gaps (let alone the first single arch!). Our article dome doesn't give much of the statics involved, but arch has some force diagrams. One "key" issue is that the whole assembly isn't really viable until the whole thing is complete, including the keystone. This is especially true in Roman arches and spherical domes, but it is still an issue in e.g. gothic arches and cloister vaults. SemanticMantis (talk) 21:38, 18 February 2014 (UTC)[reply]
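The hoop-stress point can be made quantitative with classical membrane shell theory: for an idealized uniform hemispherical dome under its own weight, the hoop (circumferential) stress resultant switches from compression to tension at about 51.8 degrees from the apex, which is why masonry domes tend to crack near the base and why chains or tension rings are added there. A sketch under those idealized assumptions (the Florence dome is octagonal and thick, so this is only indicative):

```python
import math

def hoop_stress_resultant(phi_deg: float, q: float = 1.0, R: float = 1.0) -> float:
    """Hoop stress resultant N_theta for a uniform hemispherical dome
    of radius R under self-weight q per unit area, from membrane theory:
        N_theta = q * R * (1/(1 + cos(phi)) - cos(phi)),
    with phi measured from the apex. Negative = compression,
    positive = tension."""
    c = math.cos(math.radians(phi_deg))
    return q * R * (1.0 / (1.0 + c) - c)

for phi in (0, 30, 51.8, 70, 90):
    n = hoop_stress_resultant(phi)
    state = "tension" if n > 0 else "compression"
    print(f"phi = {phi:5.1f} deg: N_theta = {n:+.3f} ({state})")
```

Above roughly 52 degrees the rings of masonry are being pushed outward rather than squeezed together, and unreinforced masonry has very little tensile strength - hence the chains.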
I think you misunderstood me. I don't mean to use my method to create that particular style of dome, but rather a uniform dome, perhaps still made of bricks, thicker on the sides and thinner at the top. I just happened to be watching that show when I thought of it.
As far as it not being viable until complete, that's normally true of a dome or arch, yes. That's why I would wait until one section (arch) was set before moving on. That section should then stand on its own. StuRat (talk) 23:49, 18 February 2014 (UTC)[reply]
Ok, let's focus on spherical domes. I still don't think it will work very well for cathedral scales, but it would probably be fine for, say, a domed garage or something. Here's an important point: a circular arch made of masonry is not very strong or stable. I mean here a simple arch on its own, not embedded in a larger masonry structure. You can put a bigger circular arch into a thicker wall than a thinner one, and you can put a bigger arch into a thin wall than you can build on its own. There are all sorts of buckling (up-and-out) forces that get directed into the mass of the wall. Consider that you can't even build a simple circular arch out of dry stone (with no wall around it). Even if all the stones are perfectly cut and fitted, their own weight will cause the arch to buckle and collapse (usually upward and outward at the ~10 and 2 o'clock directions is where the failure starts). You can build a circular arch with well-cut stone and mortar or some other adhesive at the joints. Many standard statics textbooks discuss this issue, but I don't have any specific recommendations. If your plan were to work at all, I think it would do best for a catenary dome. At least a catenary arch can support itself without side reinforcement, so you'd have a little more stability while you're filling in the other arches. But don't take my word for it. This stuff is tricky. Modern architects will use very rigorous physics engines and CAD programs to predict forces. Part of the genius of Filippo_Brunelleschi is that he was able to successfully do this stuff at all, without our modern understanding of force distribution via accurate predictive models. I guess I was a little unclear on whether you were proposing this as a construction technique of the Renaissance, the modern day, or some other period. I guessed not modern, because we just don't build things like that anymore. Stone and labor are too expensive, and steel and glass and modern scaffolding can make any kind of dome you want.
SemanticMantis (talk) 17:38, 19 February 2014 (UTC)[reply]
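The catenary suggestion above can be sketched numerically: an inverted catenary is the shape along which a uniform arch carries its own weight in pure axial compression with no bending (the supports still resist a horizontal thrust, but no part of the arch is in tension). The span and shape parameter below are arbitrary illustrative values:

```python
import math

def catenary_arch(span: float, a: float, n: int = 9):
    """Profile of an inverted catenary arch y(x) = y0 - a*cosh(x/a)
    over x in [-span/2, span/2], apex at x = 0, feet at y = 0.
    A uniform free-standing arch built to this curve carries its
    self-weight in pure compression (the thrust line follows the
    arch axis), so it needs no bending strength. Smaller `a` gives
    a taller, more pointed arch."""
    y0 = a * math.cosh((span / 2) / a)  # shift so the feet sit at y = 0
    xs = [-span / 2 + i * span / (n - 1) for i in range(n)]
    return [(x, y0 - a * math.cosh(x / a)) for x in xs]

for x, y in catenary_arch(span=10.0, a=3.0):
    print(f"x = {x:+5.2f}  y = {y:5.2f}")
```

This is the same principle behind Gaudi's hanging-chain models: a chain hangs in a catenary under pure tension, so the inverted shape stands under pure compression.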
Interesting comment about the dry stone. It does make me wonder... if you had the right kind (shape) of fine sand, and a 3D printer like apparatus rigged to carefully select and deposit individual grains according to very precise measurement with computer controlled finesse ... could you make a dome out of only dry sand, so that when you tapped on it with your finger it would fall down and pour freely through an hourglass? Wnt (talk) 21:35, 19 February 2014 (UTC)[reply]
Yes, Brunelleschi apparently made a scale model beforehand, although that was a circular dome, not octagonal, but it did use the same herringbone brick pattern used on the final construction, which apparently is what allowed it to support its own weight during construction. StuRat (talk) 18:35, 19 February 2014 (UTC)[reply]
Also, building a giant "turntable" that is almost perfectly level, moves freely, anchors well, and can support a significant portion of a cathedral's weight seems...challenging, to say the least :) SemanticMantis (talk) 21:42, 18 February 2014 (UTC)[reply]
One arch of the dome seems like a rather small portion of the total weight of the cathedral, to me. And, based on the arch shape, much of the weight should be supported by the sides, even during construction. The wooden form would just be for the excess weight, and perhaps also provide a work platform. StuRat (talk) 23:49, 18 February 2014 (UTC)[reply]
It seems to me that the problem with StuRat's proposal is that the center "keystone" of the initial arch has to also be the center stone for the completed dome. I could easily imagine that a stone that's light enough to be freely supported at the top of a fairly thin arch would not be heavy enough to provide lateral forces when the entire dome is pushing inwards onto it. Therefore it's not absolutely obvious that this approach would work. But it's hard to show that it definitely wouldn't work. SteveBaker (talk) 20:58, 20 February 2014 (UTC)[reply]
Perhaps you could use a temporary light keystone, later to be replaced by the final keystone ? The swap operation would be tricky, but some type of brace could be used to hold it in place during the swap. StuRat (talk) 14:11, 22 February 2014 (UTC)[reply]