Wikipedia:Reference desk/Archives/Science/2014 October 7


Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


October 7

GPS time dilation question (I need an expert in physics or special relativity)

1. How can we correct a satellite's clock for the time dilation caused by its motion relative to the ground, when ground-based clocks can equally be considered in need of correction due to their motion relative to the satellite? If both views are equally valid, as special relativity says, there would be no single, absolute time dilation to correct; indeed, the need for correction should cancel out entirely, requiring no action whatsoever.

2. Since everything is relative in special relativity, it is equally valid to consider the Earth to be accelerating toward stationary particles in the upper atmosphere. In that case, time slows down for Earthbound observers. The particles then decay at their usual half-life pace in their own rest frame, while only a fraction of each half-life passes for the speeding observers on Earth. Then, just as the speeding astronaut in the twin paradox returns to find a much older twin, the speeding Earthbound observers should encounter an extremely old population of cosmic-ray particles, which means those particles should have long since decayed and should never have been detected.

75.80.145.53 (talk) 03:57, 7 October 2014 (UTC)[reply]

Addressing your first question: don't think in terms of a correction - special relativity provides a translation from one reference frame to another. Neither frame is more "correct" than the other. If we have a clock on a satellite and a clock in a ground-based laboratory, one of those frames of reference may be more useful to us, but it's not any more or less correct than the other.
Addressing your second question: isn't this a homework problem? Every textbook I've ever read that ever talked about muons proffers this conundrum as a homework question. Anyway, if you feel like cheating, here's one answer, Muons, from the website of the Lawrence Berkeley National Laboratory Cosmic Ray Telescope. (Their math, like all math in theoretical physics, is quite sloppy: in particular, they assume the speed of the relativistic muon is c, because it's close enough; and they assume you're smart enough to know what a half-life is and how to use it, even when it's measured in meters. I hate to be the bearer of bad news, but if you aren't, no amount of explanation is likely to help.) The good news is, they provide a list of good books. If you don't already own a personal stack of physics textbooks, those would be a good set to start buying up and reading, so that you can reference them the next time you need to work through a relativistic calculation. I'll throw in another title, Tipler's Modern Physics, which also works out the muon half-life problem, and is written at a level that's pretty accessible to the average student of physics.
To rephrase the question another way: let's say you do take the Earth to be moving and the cosmic-ray muon to be stationary. If you work out the math with the Earth as the fast-moving object flying toward a stationary muon, then you need to apply relativistic length contraction to the Earth. The "very old" muon doesn't have to wait long before Earth's lithosphere slams into it: because the Earth is hurtling toward the muon at a massive relativistic velocity, the Earth (and all the space between the Earth and the muon) gets squashed into a length of just a few meters, and this distance is traversed within just a few half-lives of the stationary muon. So it does collide with the Earth's surface, just a lot sooner than we expected, i.e. before it can decay!
Nimur (talk) 05:40, 7 October 2014 (UTC)[reply]
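For anyone who wants to see the numbers, here is a minimal sketch of the muon problem in Python. The 0.994c speed, 2.2 µs rest lifetime, and 15 km production altitude are illustrative textbook values, not figures taken from the Berkeley page linked above:

    import math

    c = 299_792_458.0    # speed of light, m/s
    v = 0.994 * c        # illustrative muon speed
    tau = 2.197e-6       # muon mean lifetime at rest, seconds
    altitude = 15_000.0  # illustrative production altitude, meters

    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # ~9.1

    # Earth frame: the muon's clock runs slow, so it survives the trip.
    flight_time_earth = altitude / v
    lifetimes_naive = flight_time_earth / tau              # ~23: "should" decay
    lifetimes_dilated = flight_time_earth / (gamma * tau)  # ~2.5: many survive

    # Muon frame: the atmosphere is length-contracted, so the trip is short.
    flight_time_muon = (altitude / gamma) / v
    print(lifetimes_naive, lifetimes_dilated, flight_time_muon / tau)

Both frames agree on the observable outcome (the last two printed numbers are identical); they just disagree about whether slow clocks or short distances are responsible.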
The special-relativistic equivalence of inertial frames is inapplicable here because there's no symmetry between clocks on the ground and clocks in satellites. That symmetry only exists when the objects are moving inertially and gravity can be neglected, and neither of those is true here. Also, they are not symmetric because we care more about time on the ground than time in a satellite. We can choose any coordinate time we want, and it's more convenient to use an atomic clock on the ground as the standard timepiece. To prevent that clock from going out of sync with one in a satellite, you just make the one in the satellite tick at a slightly different rate. By definition, one second is 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom. The ground clock, as the standard timepiece, counts 9,192,631,770 periods before saying one second has passed. The clock in the satellite counts maybe 9,192,631,771 periods (although I think the correction is even smaller than that). This means that the satellite clock is not measuring its own proper time, but that doesn't matter because we don't care about the satellite's proper time, only about having a standard coordinate time everywhere in the vicinity of the Earth for doing GPS calculations.
Even atomic clocks at different locations on the Earth will tick at slightly different rates because of relativistic effects. I don't know how the operators of atomic clocks deal with this in practice, and I don't know how exactly the standard coordinate time is defined. Fortunately, the vast majority of applications don't need enough precision for it to matter. -- BenRG (talk) 05:57, 7 October 2014 (UTC)[reply]
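As a back-of-envelope check on the size of that rate adjustment, here is a rough Python sketch. It uses published round numbers (GPS orbital radius of about 26,560 km, nominal 10.23 MHz clock) and ignores Earth's rotation and orbital eccentricity, so treat it as an estimate only:

    import math

    GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
    c = 299_792_458.0
    r_earth = 6.371e6     # mean Earth radius, m (ground clock)
    r_sat = 2.656e7       # GPS orbital radius, m

    v = math.sqrt(GM / r_sat)                    # circular orbit speed, ~3.9 km/s
    sr = -v**2 / (2 * c**2)                      # velocity term: satellite clock slower
    gr = GM / c**2 * (1 / r_earth - 1 / r_sat)   # gravity term: satellite clock faster
    net = sr + gr                                # net fractional offset, ~ +4.5e-10

    print(net * 86400 * 1e6)      # satellite clock gains ~38 microseconds per day
    print(10.23e6 * (1 - net))    # pre-biased clock frequency, ~10229999.9954 Hz

The net result, about +38 µs/day, matches the figure quoted further down in this thread.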
Here's a simplified geometrical analogy. Imagine a straight line with a helix/corkscrew wrapping around it. The straight line is the worldline of the ground-based clock and the helix is the worldline of the satellite. You want to draw tick marks on the line and the helix at regular intervals so that there are the same total number of tick marks on each. It's clear that the marks on the helix will have to be a larger distance apart (as measured along the helix itself) than the marks on the straight line. There's no way to argue the other way around because there's no symmetry—the straight line and the helix are different shapes. -- BenRG (talk) 06:11, 7 October 2014 (UTC)[reply]
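The analogy is easy to check numerically with a tiny Python sketch (the radius and pitch are arbitrary):

    import math

    # Straight line along the axis vs. a helix of radius R around it.
    # Over one turn, the line advances by the pitch p, while the helix
    # traverses sqrt((2*pi*R)^2 + p^2), which is always longer.
    R, p = 1.0, 10.0
    ratio = math.sqrt((2 * math.pi * R) ** 2 + p ** 2) / p
    print(ratio)   # ~1.18: helix tick marks must be ~18% farther apart

(In actual spacetime the arc length is Minkowskian rather than Euclidean, so the moving clock accumulates less proper time, but the asymmetry argument is the same: a line and a helix are simply different shapes.)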
The way satellites correct their clocks is to drive their local oscillator with a coherent phase-locked loop whose input is the uplink carrier frequency. This guarantees that the clocks are coherent between the ground and the spacecraft; and they even drive the downlink oscillator off this signal. Plus, because it's coherent, you get a Doppler ranging radar for free, just by observing the error in the downlink carrier wave, which must be at a particular frequency and coherent with the ground station!
You can read all about the theoretical basis for this in our article on Unified S-band, which was the Apollo-era program that established the technology for all the satellites that followed. That article includes a ton of great links. In particular, GPS satellites (and Gravity Probe A, and the more recent Gravity Probe B) used the exact same 240/221 frequency ratio in their PLL, because the original spacecraft were built using leftover radios from the manned moon missions. It turns out there was no easier way to get high-quality, spaceship-ready microwave radios than to scavenge the leftover Apollo-era gear! Reference: PRL 1980, Test of Relativistic Gravitation using a Space-Borne MASER.
Nimur (talk) 06:34, 7 October 2014 (UTC)[reply]
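Here is a toy Python illustration of that two-way coherent Doppler measurement. The uplink frequency and range rate are made-up numbers, and real carrier tracking is far more involved; only the 240/221 turnaround ratio comes from the Unified S-band design described above:

    # The transponder multiplies the received uplink carrier by a fixed
    # turnaround ratio, so any extra shift in the downlink is pure Doppler.
    c = 299_792_458.0
    f_up = 2.106e9        # illustrative S-band uplink carrier, Hz
    turnaround = 240 / 221
    v = 1500.0            # illustrative range rate, m/s (receding)

    f_down_nominal = f_up * turnaround
    # Doppler-shifted on the way up AND on the way down:
    f_down_measured = f_up * (1 - v / c) * turnaround * (1 - v / c)

    delta_f = f_down_measured - f_down_nominal
    v_estimate = -delta_f * c / (2 * f_down_nominal)
    print(v_estimate)     # ~1500 m/s, up to a tiny (v/c)^2 correction

The ground station needs no separate clock measurement from the spacecraft: coherence means the downlink carrier error is dominated by relative motion.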
An interesting thing I heard is that with the early GPS satellites, the physicists were telling the engineers that they had to have the relativistic correction, but the engineers didn't believe in special relativity. The physicists convinced the engineers to at least put in an on/off switch for the relativistic correction, and the early GPS satellites had that switch. Bubba73 You talkin' to me? 07:46, 7 October 2014 (UTC)[reply]
You really shouldn't post something like that without a source. This page says Brian Cox told a variant of this story on QI, but with military brass being the disbelievers rather than the engineers. This page says "At the time of launch of the first NTS-2 satellite (June 1977), which contained the first Cesium clock to be placed in orbit, there were some who doubted that relativistic effects were real", without naming anybody. The NTS-2 satellite wasn't a GPS satellite, but could perhaps be called an "early GPS satellite". That page also quotes a primary source confirming that they measured the relativistic effect and then tuned the clock frequency to compensate for it. The frequency was apparently continuously tunable. I imagine they made it that way because it would be crazy not to, and measured the effect size for science. -- BenRG (talk) 17:07, 7 October 2014 (UTC)[reply]
I won't try to confirm or refute that specific anecdote, but I can at least provide a reference for anyone who is interested in GPS: Bradford Parkinson (the Air Force officer who drove the GPS research and development program in the late 1970s and early 1980s) gave a talk two years ago about his experiences with the project. It's a fascinating primary source from somebody who knows all about the topic. I had the privilege to see this presentation live; and now the video of the presentation is available at no cost to the public: GPS for Humanity. Nimur (talk) 17:38, 7 October 2014 (UTC)[reply]
Yea, Ben, I certainly wouldn't put anything like that in an article and I should have researched it some before I posted it here, instead of relying on my memory. Bubba73 You talkin' to me? 00:13, 8 October 2014 (UTC)[reply]
I guess not everybody will spend two hours researching the history of GPS (though you should! It's an excellent example of massive technological and political progress that was enabled by solid principles of mathematical engineering). So, allow me to summarize.
The five major engineering challenges that Col. Parkinson describes are:
  1. CDMA (code division multiple access) - the theory and implementation for this technology were totally new when GPS was invented, even though today it's used everywhere (see the sketch after this list)
  2. Radiation-hardening for the spacecraft atomic clocks and their electronics, which would be bathed in the Van Allen belt radiation: transistors were pretty new, and vacuum tubes were pretty fragile. Nowadays, specialty aerospace companies know how to do this really well, although it's a little bit expensive - mostly because of non-recurring engineering overhead for low-volume parts.
  3. Accurate numerical simulation, and forward-prediction, of GPS satellite orbit ephemerides (accurate to a few parts per million) at a time when the most powerful computers the Air Force could buy still only had a few kilobits of RAM. Nowadays, I can (and do!) compute or store those orbital parameters for every single known launched satellite, plus every single piece of paint-fleck that falls off of these aforementioned satellites, and I can do it all on my handheld touch-screen vector-processing supercomputer. Computers have come a long way!
  4. Building reliable spacecraft that could be expected to last over a decade, in order to break even on the budget, when no spacecraft had ever lasted anywhere near that long. This was hard in 1975, and it's still hard today.
  5. Building cheap ground stations and even cheaper user equipment, in an era when single-transistor devices were still a real novelty. Nowadays, this isn't a problem - we can (and do!) put a billion-transistor computer on a single chip, and sell these as children's playthings.
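As promised in item 1, here is a toy Python/NumPy sketch of the CDMA idea. It uses random ±1 chips rather than the actual GPS Gold codes, so it only illustrates the principle:

    import numpy as np

    rng = np.random.default_rng(7)
    n_chips = 1023                  # GPS C/A codes happen to be 1023 chips long

    # Each satellite gets its own (nearly orthogonal) spreading code.
    code_a = rng.choice([-1.0, 1.0], n_chips)
    code_b = rng.choice([-1.0, 1.0], n_chips)

    bit_a, bit_b = +1.0, -1.0       # one data bit from each satellite
    noise = rng.normal(0.0, 2.0, n_chips)

    # All the signals share one band and simply add at the receiving antenna.
    received = bit_a * code_a + bit_b * code_b + noise

    # Correlating against the right code de-spreads that satellite's bit,
    # even though the raw signal is buried in the noise.
    print(np.dot(received, code_a) / n_chips)   # ~ +1: satellite A sent +1
    print(np.dot(received, code_b) / n_chips)   # ~ -1: satellite B sent -1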
None of the major engineering or political challenges, as enumerated by this guy - whom our Wikipedia article credits as the inventor of GPS - have anything to do with the effects of special or general relativity. Whatever those effects were - if any exist - they weren't in the top list of stressful project roadblocks; there were no memorable anecdotes about those details.
In fact, a much bigger challenge is that certain private-sector companies want to use the frequency spectrum adjacent to the GPS bands to sell mobile telephone service - because it's a really nice frequency band! - but the FCC has pretty swiftly shut down those efforts. Nimur (talk) 01:13, 8 October 2014 (UTC)[reply]
Isn't this just a matter of precision? If you discount the effects of relativity, you get a position accurate to such-and-such, but factoring in those effects improves precision to ...some degree or other. So, yes, perhaps early GPS units ignored the effect but more modern ones (with additional compute power, memory, or whatever) can take it into account to get better accuracy. SteveBaker (talk) 16:42, 8 October 2014 (UTC)[reply]
The clock skew is far too large to ignore. According to error analysis for the Global Positioning System it's around 38 µs/day, which corresponds to a cumulative error of about 11 km (38 µs · c) per day the system has been in operation. You don't need to understand the theory behind the clock skew, but you need to correct for it. But only the people who design and maintain the satellites need to worry about that. You seem to be talking about relativistic corrections in GPS receivers. I don't think the idea of special relativistic corrections makes any sense in that context, since there was never really a Newtonian theory of light. They are automatically special-relativistic. In principle, they should take gravitational deflection of the light into account, but if they do, the Wikipedia article doesn't mention it. A quick calculation suggests the error from ignoring it would be on the order of 1-10 cm. The receivers definitely have to compensate for atmospheric refraction, and they could treat the gravitational effect as a form of refraction. -- BenRG (talk) 19:53, 8 October 2014 (UTC)[reply]
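For scale, the quick arithmetic behind those numbers, as a short Python check:

    c = 299_792_458.0
    skew_per_day = 38e-6        # uncorrected clock skew, seconds per day

    print(skew_per_day * c / 1000)   # ~11.4 km of pseudorange error per day
    print(1e-6 * c)                  # ~300 m of error per microsecond of clock error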

Last count of Uranus and Neptune's moons to be realistic

Our article says Uranus and Neptune have 27 and 14 moons, but [1] why does the Orbit article say Uranus and Neptune have 32 and 18 moons? How many moons did they have the last time scientists counted them? Is 35 for Uranus possible? Or is 21 for Neptune more believable, or are 38 and 25 more realistic? Is it possible Uranus and Neptune end up with 60 or possibly 100 moons? If technologies and telescopes get better, could Uranus and Neptune eventually end up with over 70 moons, or will the count just spike suddenly? Because I can see that Jupiter and Saturn could have several hundred moons, possibly thousands; I bet the number will just spike in the next twenty years. If scientists have found only 60-70% of the moons of Uranus and Neptune, how many are realistic? Several hundred? Possibly thousands?--107.202.105.233 (talk) 06:36, 7 October 2014 (UTC)[reply]

NASA still shows 27 moons of Uranus [2], as does our article, Moons of Uranus. The graphic in the Orbit article does not list its sources and may simply be incorrect. Rmhermen (talk) 14:02, 7 October 2014 (UTC)[reply]
Note that Jupiter, Saturn, Uranus, and Neptune may all have ring systems. Since rings are composed of millions or billions of small objects, each one of those could be considered a "moon", depending on your definition. StuRat (talk) 20:12, 7 October 2014 (UTC)[reply]
Stu's got the answer: it simply depends on how one defines moon. μηδείς (talk) 20:37, 7 October 2014 (UTC)[reply]
Agreed - but even with a solid lower-limit on size, improving the state of the art for telescopes allows more and more of them to be found. SteveBaker (talk) 16:38, 8 October 2014 (UTC)[reply]

Aurora Borealis

Resolved

I thought I might have seen the Aurora last night, looking northward from the north edge of Greater London. It was a fairly constant glow (no discernible change over about 2 minutes) close to the horizon that went from orangey-red in the east to greeny-white in the west, albeit with something of a break in between. I don't know too much about the phenomenon, other than it's usually associated with places way to the north of London, so I'm doubting myself and thinking it must have been some freaky lighting effect. Was it? I checked our article and it didn't really help me. --Dweller (talk) 08:55, 7 October 2014 (UTC)[reply]

The last outburst from the sun that might have triggered northern lights visible from London was September 12th or 13th. I can't find any mention of the phenomenon being visible in England last night, so it was probably the atmospheric conditions making town or commercial lighting visible at a distance, reflected from a cloud layer. Sorry to disappoint you. I've only ever seen the lights once here in northern England, but I have a hill blocking the view. Dbfirs 09:10, 7 October 2014 (UTC)[reply]
No worries. Like I said, I was doubtful. But it would have been nice. --Dweller (talk) 09:18, 7 October 2014 (UTC)[reply]
It's not impossible to see the northern lights even further south than London, but it usually only happens a few times each year, and unfortunately overcast cloud will hide them in most cases. Check "space weather" frequently on sites like spaceweather.com if you want to catch some. --Kharon (talk) 12:29, 7 October 2014 (UTC)[reply]
Thank you. Nice reply. --Dweller (talk) 14:56, 7 October 2014 (UTC)[reply]
At your latitude, a northern lights event will likely appear as a faint, diffuse, and predominantly colorless haze on the northern horizon. Even in areas closer to the Arctic Circle, where auroras often radiate from directly overhead and illuminate the night, they usually appear whitish-green to the naked eye. Imaging equipment can bring out vastly more color and definition in the aurora than what we're able to see, especially in the mid-latitudes (I've photographed the northern lights several times here at 41N in the US, but have yet to actually see them). It would normally take a very formidable solar storm to produce colorful auroras down to the southern UK, and geomagnetic activity has been fairly tame the past several days, so I agree with the above analyses. In extreme historical cases, aurora displays have been observed as far south as the Caribbean Sea, so there's always hope! (Should something like that occur today, nobody would have the electricity necessary to upload their pictures for several years, of course, but you take what you can get.) – Juliancolton | Talk 01:37, 8 October 2014 (UTC)[reply]

Military History: Weapons That Changed Warfare

Hello, I am user Mjfantom. I am here to share some great knowledge. Sometimes, in order to find the correct answers, you must search on your own. I have spent some time doing serious research on the internet, and I have picked up plenty of the information that I needed.

As it happens, the explosive substance commonly known as gunpowder (black powder) was invented before the firearm (gun). It is commonly known that Chinese alchemists attempted to create an elixir of immortality by mixing saltpeter (potassium nitrate), sulfur, and charcoal together. But mixing saltpeter, sulfur, and charcoal together resulted in an explosive black powder, and that was indeed gunpowder. But when was gunpowder invented? The true answer is the 9th century AD, during the post-classical era (Middle Ages). It was not the 8th century, not the 3rd nor 4th century BC, but the 9th century AD. Yes, the Chinese used gunpowder to invent fireworks, and when the Chinese realized the potential of gunpowder as a weapon, they used it to invent weapons for the battlefield. From Wikipedia's article about the Wujing Zongyao: "Gunpowder warfare began in China during the early 10th century". So gunpowder and guns altered war and military history.

Note: gunpowder and guns are two different things. The history of guns began in the 10th century AD, during the Middle Ages, when the Chinese invented the fire lance, a gunpowder weapon. The fire lance was the world's first gun and the direct predecessor of all firearms; it consisted of a gunpowder-packed tube attached to the end of a spear. The Chinese also invented the cannon, an artillery weapon, in the Middle Ages. The hand cannon was another early firearm, invented by the Chinese in the 13th century AD.

In conclusion, the Chinese were the first people ever to invent a gun. Guns were originally invented to fight wars, the gun is the weapon that altered warfare and military history, and the Chinese invented the world's first gun, the fire lance, for use on the battlefield. Though these are military history facts, does anyone agree? --Mjfantom (talk) 09:40, 7 October 2014 (UTC)Mjfantom[reply]

Sure, history of gunpowder and early modern warfare. InedibleHulk (talk) 10:31, 7 October 2014 (UTC)[reply]
Early firearms had little more impact than impressing enemies not familiar with them. Fire itself was used in wars long before, and advanced bows, crossbows and ballistas were far superior weapons for a long time, almost until repeating rifles and machine guns were invented. That is likely the reason why there were little to no advances in firearm technology for so many centuries. --Kharon (talk) 13:03, 7 October 2014 (UTC)[reply]
That's not quite true. Bows are much superior to muskets in most performance characteristics (range, rate of fire, accuracy, arguably even penetration). But from a logistics and personnel point of view, guns have advantages - ammunition for guns is more compact and easier to transport (albeit sometimes with greater risk), and shooting a gun is a lot easier to learn than shooting a longbow. It's also physically less demanding. I've shot modern 30-pound bows, and drawing and controlling 3 to 6 times that weight is not something I'd look forward to. --Stephan Schulz (talk) 13:20, 7 October 2014 (UTC)[reply]
See also Chinese archery#Decline. Additionally, it sure took a long time to train a bow archer, but not a crossbow archer. --Kharon (talk) 13:23, 7 October 2014 (UTC)[reply]
Yep - I think that's it. To train someone to use a longbow was incredibly arduous. The law in the UK requiring every adult male to own a longbow and practice with it at least once a week was only repealed in 1960! Although the law was clearly not well enforced in latter years, it's clear that the intent was to have a large pool of expert archers available whenever needed - and the only way to get that was to have them already trained, and practicing frequently for many years. On the other hand, the crossbow made it vastly easier for relatively untrained users to point and shoot, but crossbows were complicated and slow to reload. With higher muzzle velocities, guns are even easier to shoot than crossbows (you don't have to lead the target or allow for windage and fall of shot to anything like the same degree as with a crossbow). SteveBaker (talk) 16:36, 8 October 2014 (UTC)[reply]
They really get about, those Proto-Indoeuropeans. --ColinFine (talk) 20:43, 9 October 2014 (UTC)[reply]

With all new weapons, it takes time and experience for the owner of the weapons to learn how to use them, and sometimes time to refine the weapon itself into a usable tool. Take the tank for example; the first tanks were physically incapable of being used for Blitzkrieg warfare, even if the theory of the armoured division had been thought of at that time. Alansplodge (talk) 12:05, 9 October 2014 (UTC)[reply]

In regard to User:Stephan Schulz's claim about bows having superior range, rate of fire, accuracy and penetration, only rate of fire is true. Watch this video and fast-forward to the 13:38 mark. They do a test comparing various weapons, including a longbow and a matchlock. The matchlock beat the longbow in range, accuracy, and penetration. Contrary to popular belief, the longbow was not used by skirmishers; longbows were used en masse against bunched-up formations because they were too inaccurate to use against single targets. The whole conception of being able to pick off moving targets singly with an arrow shot, like Robin Hood, only happens in Hollywood movies. User:Kharon is incorrect: as the video demonstrates, crossbows are inferior to matchlocks in virtually every respect imaginable, including rate of fire. Weaker crossbows with poor range, accuracy and penetration can be reloaded with just your hands, but more powerful crossbows (in other words, any crossbow that would be used in battle against armored foes) needed special tools that were cumbersome and slow to use, as the video demonstrates. He's also wrong about firearm stagnation; firearms advanced tremendously throughout their history. Matchlocks evolved into flintlocks, then percussion caps were used, then rifled muskets, then smokeless powder was developed, then metal cases replaced paper cartridges, then repeating rifles came into use, followed by automatic and semi-automatic firearms. If anything, firearms are stagnating now. The Browning M2 machine gun has been in use for 100 years. The M1911 pistol is still being used over 100 years after it was designed in 1911. There is some development in regard to caseless ammunition via the LSAT program, but the program has been in development for a long time now with no production firearm built yet, just prototypes. There are also some concepts involving guided flechettes with target-seeking capabilities, but I don't think a working prototype has been developed yet. ScienceApe (talk) 18:06, 10 October 2014 (UTC)[reply]

Mutton meat cooling

Why does freshly prepared lamb/mutton meat cool quickly? I've read it may be due to its fat, but I'm not sure. 93.174.25.12 (talk) 13:36, 7 October 2014 (UTC)[reply]

As far as I know, it does not cool particularly quickly. But mutton fat apparently has a higher melting point than other animal fats, so it also turns solid earlier. Many people think it is unpleasant, or at least less pleasant to eat in that state. --Stephan Schulz (talk) 16:38, 7 October 2014 (UTC)[reply]

Battery power supply for DRL?

According to daytime running lamp, they are powered by the "engine, which in turn requires burning additional fuel". Why not power DRLs from the rechargeable car battery, thus eliminating the negative environmental impact?--93.174.25.12 (talk) 17:37, 7 October 2014 (UTC)[reply]

It's not clear to me how the running lights would be powered "by the engine" in any different way from normal nighttime lighting. ←Baseball Bugs What's up, Doc? carrots→ 17:45, 7 October 2014 (UTC)[reply]
Recharging the car battery also "requires burning additional fuel". AndyTheGrump (talk) 17:58, 7 October 2014 (UTC)[reply]
Right, everything electric in a car (fans, lights, radio, etc.) is run off the alternator. Using more electricity uses more fuel. There ain't no such thing as a free lunch... SemanticMantis (talk) 18:49, 7 October 2014 (UTC)[reply]
The running lamps use VERY little power compared to driving your car. Our article suggests between 5 watts and 200 watts. A typical small-car engine is capable of producing around 100 kW...but probably makes only about half of that most of the time you're driving. So even in the worst case (a small car engine with the least efficient lights) - you're adding about 0.4% to your fuel consumption by leaving the lights on. In the best case (a large car engine with efficient lights) - the number could easily be 100 times smaller.
Honestly, it's really not worth worrying about. Since the amount of energy it takes to move your car at highway speeds is roughly proportional to the square of the speed, you only have to reduce your speed by a tiny fraction of a mile per hour to save vastly more energy than by turning off the running lights. In contrast, the air-conditioner consumes about 5% of the energy in a small car, and under-inflated tires can have a similar impact on mpg...so pay MUCH more attention to those!
Suppose running lights improve your chance of avoiding an accident (which is somewhat disputed). On average, you'll have one accident of some kind every 500,000 miles. If you swap out your car after 100,000 miles, then you have a 20% chance of having an accident during the life of the car without running lights and, say, a 19% chance with them - a 5% relative reduction in expected accident costs. So - what does an average accident cost? Assuming you don't die or get seriously injured...the insurance companies claim it's around $5,000...so with a 20% accident chance, your expected repair bill is about $1,000, and running lights (on average) are saving you about $50 of that over the life of the car...not counting loss of life, personal injury, pain, suffering and loss of use of the car while it gets fixed.
If you get 30mpg, and gas is $3.50/gallon then over those same 100,000 miles - you'll spend around $12,000 on gas...at most 0.4% of which is spent on the running lights...so the running lights cost you around $50 in gas. That means that on average, the running lights roughly pay for themselves...not counting injury, death, etc, etc. Of course you're probably only paying a $500 deductible for an accident - but hopefully, your insurance company is taking into account the fact that your car has daytime running lights when figuring out your rates - so you should still save.
But what about carbon footprint? Surely that matters? Well, 17% of the carbon footprint in the life of your car went into its manufacture...the other 83% into using the car (mostly the gasoline you burn). But if the lack of running lights increases the probability that you'll need to replace all (or even just a part) of your car, then you can see that it doesn't take much probability of wrecking the car - or just replacing a bumper - to push the carbon footprint up by 0.4%. The math is hard to do here - but if DRL-preventable accidents would have cost you even a small part of your car over its life - then the DRLs will reduce the overall carbon footprint rather than increase it.
I'm quite sure my numbers are WAY too approximate to draw solid conclusions - but the bottom line is that DRLs are not a measurable waste of energy - they roughly pay for themselves in cost-of-ownership...and perhaps in carbon footprint - and that's why so many modern cars have them.
Don't sweat the small things!
SteveBaker (talk) 16:21, 8 October 2014 (UTC)[reply]
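Here is the same back-of-envelope arithmetic as a small Python sketch. Every input below is just the rough figure quoted in the post above, not measured data:

    # Rough cost-benefit of daytime running lights (DRLs).
    drl_power_w = 200.0            # worst-case DRL draw, watts
    engine_power_w = 50_000.0      # typical cruising engine output, watts
    lifetime_fuel_cost = 12_000.0  # $ of gas over 100,000 miles (30 mpg, $3.50/gal)

    extra_fuel_fraction = drl_power_w / engine_power_w        # ~0.4%
    drl_fuel_cost = lifetime_fuel_cost * extra_fuel_fraction  # ~$48

    accident_cost = 5_000.0        # assumed average repair bill, $
    p_without_drl = 0.20           # accident chance over 100,000 miles
    p_with_drl = 0.19              # assumed improvement from DRLs

    expected_savings = (p_without_drl - p_with_drl) * accident_cost  # ~$50
    print(drl_fuel_cost, expected_savings)   # roughly break-even in dollars

On these assumptions the dollar cost and the expected dollar savings are about the same size, which is why the injury-avoidance benefit, rather than the repair bill, ends up carrying the argument.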

Is E85 a good deal?

My car can run on either E85 (85% ethanol) or regular gasoline. E85 costs $2.70 a gallon now, and regular gasoline $3.20 a gallon here. So, which is the better deal?

Some complicating factors:

1) I'm not quite sure what "regular gasoline" is here in Michigan. I suspect it's 20% ethanol, but I'm not sure. It's 87 octane ((R + M)/2 method), if that helps.

2) Ethanol has less energy per gallon, but I'm not sure how much less.

3) My car is an automatic with a high idle, so it will drive along at maybe 25 mph without my foot on the gas pedal. This is a problem going down at least one hill, where I need to shift to neutral or ride the brake. Will it idle at a slower speed on E85? If so, that might be a good thing.

Thanks, StuRat (talk) 19:51, 7 October 2014 (UTC)[reply]

You can do your own homework, right? See Energy_density#Energy_densities_ignoring_external_components, which quantifies how much less energy is in a kg of e85 compared to gasoline. Next, see Gasoline#United_States_of_America, which says you are probably getting E10 at the pump (there is also an energy density listed for that). Also note that E85 says "critics contest the benefits of E85 by focusing on the fact that E85 has 33% less energy content than 100% gasoline (and 30% less than the E10 gasohol blend that is sold by almost all retailers in the US)," which gives you a basic starting point. You can convert to gallons if you want, using mass density of ethanol and gasoline. You could also try to track down the details of mpg on gasoline vs E10 vs E85, but that will be basically the same as the reduction in terms of the energy density (and my claim is confirmed by these sources [3] [4], 15-30% range given in the first link, ~26% reduction on E85 in the latter test). As I see it, E85 will be a better deal than E10, provided the unit cost is less than 70% the cost of E10. SemanticMantis (talk) 21:19, 7 October 2014 (UTC) (p.s. I think you should have your car looked at by a mechanic. "idle" at 25 mph sounds dangerous to me, and can probably be fixed inexpensively.)[reply]
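A minimal break-even sketch in Python based on those numbers. The ~26% mpg penalty is the figure from the road test linked above, and the prices are the ones quoted in the question:

    price_e10 = 3.20     # $/gallon, quoted price of "regular" (assumed E10)
    price_e85 = 2.70     # $/gallon, quoted price of E85
    mpg_penalty = 0.26   # ~26% fewer miles per gallon on E85

    # Cost per mile scales as price divided by mpg (take E10 mpg as 1.0).
    cost_per_mile_e10 = price_e10
    cost_per_mile_e85 = price_e85 / (1 - mpg_penalty)

    print(cost_per_mile_e85 / cost_per_mile_e10)   # ~1.14: E85 is ~14% dearer per mile
    print(price_e10 * (1 - mpg_penalty))           # break-even E85 price, ~$2.37/gal

At the quoted prices, E85 is about 16% cheaper per gallon but needs to be roughly 26% cheaper to break even, which matches the conclusion below.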
Thanks. Since the price is only 16% less, I guess it's not worth it. StuRat (talk) 03:43, 8 October 2014 (UTC)[reply]
Right, at 30% less it's clearly a better value (ignoring other factors), but it still might make sense if your particular car is closer to the 15% reduction in mpg. Some people also like the idea of using fuel made from corn in the USA, but that is contentious, as I note below. SemanticMantis (talk) 14:42, 8 October 2014 (UTC)[reply]
Are you incapable of filling the tank with each and determining the relative mileage? μηδείς (talk) 22:01, 7 October 2014 (UTC)[reply]
I could try that, but of course different drives on different days will affect the results. For example, I might get caught in a traffic jam one day and not the other. StuRat (talk) 03:39, 8 October 2014 (UTC)[reply]
But I think you'd like to know in advance, right? I was in Atlanta about 3 weeks ago and had a choice, and my car will take E85. But I went with regular. I don't know which gives you the best mileage for the money. But, IIRC, my car manual says that E85 gets about 25% less MPG. Bubba73 You talkin' to me? 03:46, 8 October 2014 (UTC)[reply]
The article E85 is one of the most embarrassingly polemic articles I've seen, and barely mentions Brazil, where the fuel is legal. In the U.S. a 54-cent per gallon tariff is applied to cheap Brazilian ethanol (unlike petroleum), which makes it little more than a curiosity/backup plan.[5] (To be fair, I haven't researched this carefully enough to see how gas taxes affect the value; what is clear from the source is that it is economically viable in Brazil) Wnt (talk) 11:59, 8 October 2014 (UTC)[reply]
It's more economically viable in Brazil because they can grow sugar cane there, and sugar cane can be grown with far less energy input (google for BP's involvement in Brazilian biofuels based on cane, as well as at the U. Illinois based on corn and miscanthus). Stu only asked about the "deal", i.e. the price comparisons, so I didn't get into the other issues. If you're interested, I wrote up a bit with several refs on corn subsidies in the USA for a previous question here [6]. The point is, there are taxes on gasoline, there are taxes on imported ethanol, and there are huge subsidies for corn production in the USA. Even making small progress toward sorting that all out in terms of economic, energetic and ecological impacts is not something anyone can do without specialized training and at least a year or so of careful research. There are also serious doubts about the long-term viability of a corn-based biofuel, based on life cycle analysis; see e.g. here [7] [8]. Here is a similar assessment of lignocellulose ethanol production, which is a rather different thing: [9]. Finally, there's the food vs. fuel debate to be aware of, but that's too far off-topic even for me right now :) SemanticMantis (talk) 14:42, 8 October 2014 (UTC)[reply]
Clearly we need to make biofuels - because they are efficient ways to convert sunlight into a high-energy-density material that you can put into a car - and if you do it right, it's all carbon-neutral. Trouble is that you need the right plant to start with. In the USA, the pressure to do something meant choosing whatever plant we could quickly and easily use - and that was corn. But we know that corn is a terrible choice. So they had to use a bunch of subsidies to make it work. What's needed is a 'reboot' where we stop and take the time to figure out which plant is the best fit here. It's complicated because there is an icky tangled mess of the cost to make fertilizer, the cost to replant each year, the cost to process into gasoline, etc. I wish there was a way to make this happen, but it requires politicians to make bold choices - and right now, there is zero chance of that happening. SteveBaker (talk) 15:39, 8 October 2014 (UTC)[reply]
Yes, biofuels can in principle be carbon neutral, and even carbon negative. But corn ethanol is still carbon positive, just less-so than fossil fuels. This place is one of the leaders in current research on second-generation biofuels, with plenty of refs for the interested [10]. SemanticMantis (talk) 16:05, 8 October 2014 (UTC)[reply]
From the original question:
  1. In the USA, "regular" gas is typically E20...so yeah.
  2. Lots less! Gasoline gallon equivalent has a really neat table...it takes 1.39 gallons of E85 to produce the same energy as a gallon of straight gasoline...which means that you have to spend ($2.7 x 1.39) = $3.75 on E85 to get the same energy content as a gallon of gasoline at $3.20...so yeah...it's significantly more expensive. Also, your range is significantly shorter - which means you'll waste more gas finding filling stations because you have to fill up more often. Ideally we'd say "oh but E85 is so much better for the environment than E20 - so it's worth spending more"...but as has been said many times and in many places...it's probably not.
  3. I'm not sure that E85 will alter the idle rate - the car's computer will be futzing with it depending on the output of the O2 sensor and such - so what happens depends a lot on the kind of car you have. But you should probably get someone to adjust your idle...a competent mechanic should be able to adjust it in a couple of minutes...and the savings on gas will pay for his time very quickly! That kind of high idle can be kinda dangerous too. Riding the brake will cause the brake fluid to heat up and eventually boil - then you'll have no brakes, and a car that won't stop by itself without them!
You have to be careful running E85 in older cars - it's more acidic than gasoline and can corrode certain materials more quickly than E20, it dissolves some materials like rubber alarmingly quickly - so gaskets can get fritzed. There are also issues with cooling the fuel pump if it's the kind that sits inside the gas tank. Fuel pumps are typically cooled by the gasoline in your tank - but ethanol doesn't conduct heat away as well, so your fuel pump may run hotter and therefore wear out faster.
SteveBaker (talk) 15:39, 8 October 2014 (UTC)[reply]
Ummm, I'm pretty sure I've only seen E10 in the U.S., which is what Common ethanol fuel mixtures says (though E15 is out there somewhere). Until recently a place near me sold straight gasoline with no ethanol (though I assume it had the other anti-knock additive in substantial quantity instead) Wnt (talk) 17:19, 8 October 2014 (UTC)[reply]

Workplace noise reduction

Besides following the Buy Quiet program, what can builders and mechanics do to reduce workplace noise?
Wavelength (talk) 23:56, 7 October 2014 (UTC)[reply]

Soft surfaces absorb noise rather than reflecting it back into the room...so drapes and other soft wall, floor and ceiling coverings. Workers can wear noise-cancelling headphones. You can also divide the workspace up into smaller "cubicles" and line those with sound-absorbing surfaces so you don't have to hear the noise from everyone else as well as your own noise. SteveBaker (talk) 01:35, 8 October 2014 (UTC)
I was about to say essentially the same thing. There are many options for sound absorbing materials: acoustic wall panels, rubberized flooring, wall baffles, ceiling acoustic "hang-downs", sound curtains, acoustic floor underlayment, portable acoustic screens, etc. I could find links to companies, but if you do it, you'll get suppliers in your area. —71.20.250.51 (talk) 01:49, 8 October 2014 (UTC) This site has fairly comprehensive info: [11] 71.20.250.51 (talk) 02:23, 8 October 2014 (UTC)[reply]
One thing to be careful about though, is that many soft surfaces are also flammable, so watch for fire safety violations. StuRat (talk) 03:37, 8 October 2014 (UTC)[reply]

Assuming you're in the US, contact OSHA. μηδείς (talk) 04:33, 8 October 2014 (UTC)[reply]

In many workplaces there is little that can be done to reduce the noise level. However, workers can, and should, do much to protect their hearing with such simple devices as earplugs and acoustic earmuffs. Dolphin (t) 04:48, 8 October 2014 (UTC)[reply]
Writing as an ex-noise engineer, I dispute that. The changes required may be expensive but there is almost always a way to reduce noise by 6 dB, although the changes needed may be radical. Greglocock (talk) 05:07, 8 October 2014 (UTC)[reply]
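For context on what a 6 dB reduction means physically, a two-line Python check (decibels are logarithmic):

    # A -6 dB change roughly halves sound pressure and quarters sound power.
    print(10 ** (-6 / 20))   # pressure ratio, ~0.50
    print(10 ** (-6 / 10))   # power ratio, ~0.25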
The question is "what can builders and mechanics do" to reduce the noise in their workplaces? The radical and expensive changes to which you refer - can they be done by builders and mechanics?
In many workplaces there are places, activities and items of equipment that are significant sources of noise for a short period of time, after which the situation changes and the noise sources disappear. This is certainly true of the building industry. In these situations there is not sufficient time to design and implement a specialist solution to the noise problem; people working in such areas must rely on acoustic earmuffs and earplugs. Dolphin (t) 11:11, 8 October 2014 (UTC)[reply]
The question says "what can builders and mechanics do to reduce workplace noise", not their workplace noise. 75.41.109.190 (talk) 15:06, 8 October 2014 (UTC)[reply]
In my opening question, I meant their workplace noise; also, the workplace(s) can be either indoors or outdoors.
Wavelength (talk) 16:28, 8 October 2014 (UTC)[reply]
Portable acoustic screens: [12], [13], etc. See also my previous link above.  71.20.250.51 (talk) 18:43, 8 October 2014 (UTC)[reply]