
Wikipedia:Reference desk/Archives/Science/2017 May 1

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


May 1

Are humans diverse enough to have immunity towards all diseases?

HIV is a major disease. But humans descended from survivors of the bubonic plague are thought more likely to carry a gene variant (CCR5-Δ32) that protects against HIV. And many people in Africa have the sickle cell trait, which can protect against malaria. The cost is that any two heterozygote carriers can produce a child with sickle cell anemia, but by the Punnett square of simple Mendelian genetics that is only a 1/4 chance per child, so it doesn't seem like a big cost. Are humans diverse enough to withstand any disease? 50.4.236.254 (talk) 01:01, 1 May 2017 (UTC)[reply]
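
As a quick check of that 1/4 figure, here is a minimal Python sketch (an editorial illustration, not part of the original question) that enumerates the Punnett square for two sickle-cell carriers:

    from itertools import product
    from collections import Counter

    # Each parent is a heterozygote carrier: one normal allele 'A',
    # one sickle allele 'S'.
    parent1 = ['A', 'S']
    parent2 = ['A', 'S']

    # The four equally likely allele pairings form the Punnett square.
    outcomes = Counter(''.join(sorted(pair)) for pair in product(parent1, parent2))

    total = sum(outcomes.values())
    for genotype, count in sorted(outcomes.items()):
        print(f"{genotype}: {count}/{total}")
    # AA: 1/4 (unaffected), AS: 2/4 (carrier), SS: 1/4 (sickle cell anemia)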

I don't think so, because the great apes have far greater genetic diversity than we do, and yet Ebola mortality rates for gorillas are about 95 percent. So it's very likely that there exists a virus (on paper, at least) that could wipe out 100% of humans. Such a virus may not exist in nature, but it could be manufactured in a bio-weapons lab. Count Iblis (talk) 05:29, 1 May 2017 (UTC)[reply]
As for genetic engineering: Killer mousepox virus raises bioterror fears springs instantly to mind. Of course, that work was carried out on a small cohort of subjects. If a similar thing were done on, say, smallpox, and the result released, maybe more than one in ten thousand of us would survive (i.e., not 100% mortality, but a survival rate of more like >0.01%), and so the next generation would have some resistance. It has been rumored that the Soviet Union once developed such a strain of smallpox, but I have not been able to substantiate that. It may have been just US propaganda, put about so that their own bio-terrorism work could get greater government funding. Aspro (talk) 21:04, 1 May 2017 (UTC)[reply]
Except that disease agents also evolve to counteract human attempts (biological or otherwise) to combat them. See Antimicrobial resistance, for example. --Jayron32 12:06, 1 May 2017 (UTC)[reply]
Also, a disease agent with a 100% host mortality rate would doom itself (there is a relevant term that I can't recall at the moment). 2606:A000:4C0C:E200:3CF4:5668:5FB:EC43 (talk) 18:25, 1 May 2017 (UTC)[reply]
You may enjoy reading about optimal virulence and the Red Queen hypothesis. SemanticMantis (talk) 20:16, 1 May 2017 (UTC)[reply]

Technology for launching satellites vs for ballistic missiles

How far along is a country in developing ballistic missiles once it can successfully launch satellites into orbit? What makes the former more difficult than the latter? For example, North Korea was able to launch Kwangmyŏngsŏng-4, but seems to have problems with its ballistic missile technology. --Hofhof (talk) 19:28, 1 May 2017 (UTC)[reply]

Assuming you're referring to ICBMs, the main problem is the extreme temperatures associated with reentry. 2606:A000:4C0C:E200:3CF4:5668:5FB:EC43 (talk) 20:00, 1 May 2017 (UTC) -- See also maneuverable reentry vehicle -20:04, 1 May 2017 (UTC)[reply]
I would have thought that the bigger technological challenge is making the warhead light enough for it to be delivered by a rocket of reasonable size. Kwangmyongsong-4 was estimated to have a mass of 200 kilograms; Little Boy, the bomb dropped on Hiroshima, came in at about 4.4 tonnes, which would require a very large delivery rocket. It takes time to develop a physically small, lightweight nuclear weapon. If Kim Jong-un wanted ballistic missiles armed with just conventional explosives – he already has them! But those don't give him the international political negotiating power of a nuclear-tipped missile capability. This R&D takes time and is harder than placing a small aluminum box into orbit. Aspro (talk) 20:33, 1 May 2017 (UTC)[reply]
Also, the reentry problem would not apply to a medium-range ballistic missile (MRBM). North Korea has already developed the Pukkuksong-1, successfully launched from a Sinpo-class submarine; put a nuke in that, and you don't need an ICBM. 2606:A000:4C0C:E200:3CF4:5668:5FB:EC43 (talk) 20:54, 1 May 2017 (UTC)[reply]

What was the highest text and/or number information density when the "first computer" and film were invented?

Microfilm? Writing really small on a grain of rice? What was the first computer-based information storage/memory/register/whatever technology to surpass these? Sagittarian Milky Way (talk) 23:15, 1 May 2017 (UTC)[reply]

  • I'm not sure what your question is, but the earliest computers generally used the same punched cards used by Unit record equipment (EAM gear), so their storage density was about 80 characters per card, 3,000 cards per box. A box was about 12" long, 6" wide, and 3" high. These cards were also stored in much longer (48") steel drawers, 12,000 per drawer. Storage facilities in the EAM era were huge, ranging in size up to fairly large warehouses. Denser storage at that time was microfilm, various types of microform, and microdot, but these were not machine-readable and therefore not really comparable. Of course, we could now use OCR techniques to read microfilm or microdot, but not back then. -Arch dude (talk) 23:50, 1 May 2017 (UTC)[reply]
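
For concreteness, the card density implied by those figures can be worked out in a few lines of Python (an editorial sketch using only the rough numbers quoted above):

    # Punched-card storage density from the figures quoted above:
    # 80 characters/card, 3,000 cards per 12" x 6" x 3" box.
    chars_per_card = 80
    cards_per_box = 3000
    box_volume_in3 = 12 * 6 * 3  # cubic inches

    density = chars_per_card * cards_per_box / box_volume_in3
    print(round(density), "characters per cubic inch")  # ~1111, i.e. about 1.1 KB/in^3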
Well, there were Williams tubes, tubes filled with mercury (delay lines), punched tape, and lots more. Aspro (talk) 23:56, 1 May 2017 (UTC)[reply]
  • Williams tubes and mercury delay lines were not persistent: the data was lost when the power was shut off, like DRAM. The densities were truly bad. Punched paper tape is not as dense as punched cards in bulk, even the fanfold type that was developed quite a bit later than this, and while paper tape on big rolls was almost as dense, it has really, really bad access times. Most of the "lots more" was developed after the time the OP specified. -Arch dude (talk) 00:05, 2 May 2017 (UTC)[reply]
  • To answer your second question: let's take the microdot as the baseline. A microdot stored a printed page of information on a dot roughly 1/100th of an inch in diameter, so 10,000 pages per square inch, with a page of about 5,000 characters. So, about 50 MB/in², but these square inches are on paper of (say) 100 sheets per inch of thickness, so call it 5 GB/in³. This is very roughly the density of a DVD (1995). It is not strictly comparable, of course, since the production of 5 GB of data on microdots would have taken months at the time. Computerized production of microdots might have become possible in the late 1960s, and these would have been write-once, not read/write. Today, a 256 GB microSD card is < 0.01 in³, so 25 TB/in³, or about 5,000 times denser than microdot. -Arch dude (talk) 00:42, 2 May 2017 (UTC)[reply]
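
That chain of arithmetic can be reproduced step by step in Python (an editorial sketch; all figures are the rough estimates given above, not measured values):

    # Microdot density estimate, reproduced step by step.
    dots_per_inch = 100                  # one page per 1/100" dot, packed on a grid
    pages_per_in2 = dots_per_inch ** 2   # ~10,000 pages per square inch
    chars_per_page = 5000
    bytes_per_in2 = pages_per_in2 * chars_per_page   # ~50 MB/in^2

    sheets_per_inch = 100                # assumed paper thickness
    bytes_per_in3 = bytes_per_in2 * sheets_per_inch  # ~5 GB/in^3

    microsd_bytes = 256e9                # 256 GB card
    microsd_in3 = 0.01                   # approximate volume
    print(bytes_per_in3 / 1e9, "GB/in^3 for microdots")               # 5.0
    print(microsd_bytes / microsd_in3 / 1e12, "TB/in^3 for microSD")  # ~25.6
    print((microsd_bytes / microsd_in3) / bytes_per_in3, "x denser")  # ~5120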
Thanks. Do you also know what the densest way to store information was before the invention of photography? People with good fine motor skills writing with a hair dipped in ink? (not very practical, of course) A binary dot code that encodes whole words and phrases into 20- or 25-dot matrices? (painted by monks with microscopes and single-hair paintbrushes, of course) Sagittarian Milky Way (talk) 04:32, 2 May 2017 (UTC)[reply]
Are you asking about methods that were actually used, or ones that would have been possible with existing technologies had anyone thought of them? {The poster formerly known as 87.81.230.195} 2.122.60.183 (talk) 06:32, 2 May 2017 (UTC)[reply]
Either would be welcome. (I should've put the monk sentence into small type) Sagittarian Milky Way (talk) 16:38, 2 May 2017 (UTC)[reply]
  • We're not supposed to speculate, but we can put bounds on this. Photography that might be used consistently for image-size reduction was available by about 1840. Before that, I think the best choice for manual image reduction was the pantograph, but I do not know if this was ever done. I arbitrarily speculate that the smallest unambiguous images would be 100 dpi dot matrices, which is 10,000 bit/in². I do not know how much data compression was known at the time, but for written text, codebooks were known, and the Royal Navy had a signal book that could have been used for this. For text in a well-understood domain of discourse, I speculate that compression of perhaps 8:1 would have been achievable, giving 10,000 byte/in². The material would need to be vellum, parchment, or something similar, so thickness is also an issue. As you go back in time, you progressively lose knowledge of coding, precision of the pantograph, and possibly thinness of the material. -Arch dude (talk) 16:29, 2 May 2017 (UTC)[reply]
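
Put as numbers, under the same speculative assumptions (editorial sketch):

    # Pre-photography density bound from the estimate above.
    dpi = 100                       # smallest unambiguous manual dot pitch
    raw_bits_per_in2 = dpi ** 2     # 10,000 bit/in^2
    compression = 8                 # assumed codebook compression, ~8:1
    effective_bytes_per_in2 = raw_bits_per_in2 * compression / 8
    print(effective_bytes_per_in2, "byte/in^2")  # 10,000.0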
The Intaglio (printmaking) technique can reproduce very fine detail. With a skilled hand and a magnifier, you could use it to produce tiny type, similar to modern security printing, but I don't know if this was ever done. Until recently, security printing was mostly Guilloché patterns, but perhaps someone had some other reason to print tiny lettering. ApLundell (talk) 17:40, 2 May 2017 (UTC)[reply]
Spider silk is about 3 micrometers in diameter. Given a very large number of highly detail-oriented blind Andean monks or deaf Chinese nuns, you might assemble a quipu that contains - depending on details of dyeing and knot construction - several binary bits per maybe 400 cubic micrometers of space, which works out to something like 10^11 bits (on the order of 100 gigabits) per cubic inch, I think. Of course, they couldn't read what they were writing - you'd need scanning EM, computers, automation, and a very large number of detail-oriented unpaid American interns to do that. Wnt (talk) 15:09, 5 May 2017 (UTC)[reply]
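
Checking that figure (editorial sketch, using the 400 µm³ per knot assumed above and reading "several" as 3 bits):

    # Quipu density check: cubic micrometers per cubic inch / volume per knot.
    UM_PER_INCH = 25_400
    in3_in_um3 = UM_PER_INCH ** 3    # ~1.64e13 um^3 per cubic inch
    knot_volume_um3 = 400            # assumed volume per knot
    bits_per_knot = 3                # "several"

    bits_per_in3 = in3_in_um3 / knot_volume_um3 * bits_per_knot
    print(f"{bits_per_in3:.1e} bits per cubic inch")  # ~1.2e11, ~100 gigabits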