Wikipedia:Reference desk/Archives/Science/2011 April 29

From Wikipedia, the free encyclopedia
Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


April 29

nervous system

carries impulses to involuntary muscles?

See nervous system. -- kainaw 01:26, 29 April 2011 (UTC)[reply]
...and involuntary muscle. StuRat (talk) 04:27, 29 April 2011 (UTC)[reply]
Autonomic nervous system is probably the article that is most directly relevant; also enteric nervous system. Looie496 (talk) 15:24, 29 April 2011 (UTC)[reply]

SF "sensors" and today's technology

I've always enjoyed the deliberately vague "sensors" of Star Trek, which seem able to pick up just about any sort of information the crew needs (planet population, messages, mineralogy, etc.) UNLESS it's an emergency or convenient to the plot for them to stop functioning. I was wondering this morning about mankind's current "sensor" ability. Assuming a ship in Earth orbit approximately as far away as the Moon, packed with the best "sensor" technology available to our species, what details of Earth would we be able to measure? What if we were in solar orbit out near Jupiter and trying to observe the Earth? The Masked Booby (talk) 01:35, 29 April 2011 (UTC)[reply]

I'd look into multispectral imaging and terahertz - especially tunable terahertz masers. The potential of the latter is as yet unknown but could be incredibly creepy, powerful, and prone to abuse. Wnt (talk) 01:48, 29 April 2011 (UTC)[reply]
As a basic physics-imposed limit, start with angular resolution. Roughly speaking, the smallest features that can be resolved will be separated by a distance (on the order) of the wavelength used to probe, multiplied by the distance to the object, divided by the width of the sensor aperture. For example, a one-meter optical telescope working at 500 nm wavelength (blue-green) and 400,000 km from Earth (about the distance to the Moon) will be able to resolve features down to approximately 200 meters—that's good for counting stadiums, not the people in them. (Incidentally, this is one of the reasons why we can't point the Hubble telescope at the Moon and look at the Apollo landing sites; we didn't leave anything large enough behind.) If the ship is a full kilometer across and combines information from multiple sensors across the hull, then the resolution improves, down to a theoretical 20 cm. You can count heads, but not recognize faces. On the other hand, if you move to longer, more penetrating wavelengths (say, terahertz radiation, per Wnt's comment above) then you're working at wavelengths of 0.1 to 1 millimeter. Your resolution for a ship-width sensor array at the distance of the Moon is something like 400 kilometers. (Ouch.) Note that all those numbers improve by a factor of 10 or so if you're in geosynchronous orbit instead of the Moon's distance—and they all get much worse if you move out to, say, Jupiter's orbit. TenOfAllTrades(talk) 03:14, 29 April 2011 (UTC)[reply]
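To make those numbers easy to check, here is a minimal sketch using the same rough rule of thumb (resolution ≈ wavelength × range / aperture); it ignores the ~1.22 Rayleigh factor, the atmosphere, and detector noise, so treat the outputs as order-of-magnitude only:

    # Rough diffraction-limit calculator for the scenarios quoted above.
    # Sketch only: resolution ~ wavelength * range / aperture.
    def ground_resolution(wavelength_m, range_m, aperture_m):
        return wavelength_m * range_m / aperture_m

    MOON = 4.0e8   # ~400,000 km, in metres
    GEO = 3.6e7    # geosynchronous altitude, ~36,000 km

    print(ground_resolution(500e-9, MOON, 1.0))     # ~200 m  (1 m optical telescope)
    print(ground_resolution(500e-9, MOON, 1000.0))  # ~0.2 m  (1 km sensor baseline)
    print(ground_resolution(1e-3, MOON, 1000.0))    # ~400 km (1 mm terahertz, same baseline)
    print(ground_resolution(500e-9, GEO, 1.0))      # ~18 m   (same telescope from GEO)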
Of course, if you don't need instantaneous measurements, you can integrate your sensor readings over a long period of time to winnow down the minimum resolvable feature. Superresolution explains how to beat the angular resolution limit - but it's not magic, it's just interpolation. If you could take tens of thousands of "photographs," and have a little luck with dithering, your angular resolution limit might improve by a factor of 10. Of course, the longer you spend imaging a target, the more the target's motion is going to cause motion blur. (This is sort of like how a photographer can adjust the f-stop and shutter speed - you can't win on both! As you approach the physical limitations of your equipment, you either get blur because of depth-of-field, or blur because of subject motion.) Nonetheless, you can superresolve any signal, whether it's optical, radio, or "whatever." I will also point out the oft-forgotten effect of cloud cover. Just because "Google Maps satellite view" shows you the best commercially-available photographs aggregated over the entire world doesn't mean that any satellite can get that great a view at any time. (Besides, much of the Google data-set is actually aerial, not satellite, imagery.) Point is, if your sky is overcast, your space-sensor can't see the ground. If your sky is "a little hazy," your space sensor can't see the ground. And even if it's a perfectly sunny day with not a cloud in sight... if your sky has air in it, atmospheric distortion will muck up the "perfect" theoretical resolution of any sensor. Nimur (talk) 03:34, 29 April 2011 (UTC)[reply]
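As a toy illustration of the dithering idea (my own sketch, not any real pipeline): average many randomly shifted low-resolution frames of a 1-D scene onto a fine grid. Detail below the single-frame pixel pitch survives, attenuated by the pixel's box blur; a real pipeline would add a deconvolution step.

    # Toy 1-D shift-and-add sketch of dither-based superresolution.
    # Illustrative only; the numbers and setup are invented for the demo.
    import numpy as np

    rng = np.random.default_rng(42)
    FINE = 1200    # fine-grid samples in the "true" scene
    FACTOR = 10    # one detector pixel spans 10 fine-grid samples
    truth = np.sin(2 * np.pi * np.arange(FINE) / 15.0)  # period = 1.5 pixels,
                                                        # aliased in any single frame
    accum = np.zeros(FINE)
    hits = np.zeros(FINE)
    for _ in range(5000):                    # many dithered "photographs"
        s = int(rng.integers(0, FACTOR))     # random sub-pixel pointing offset
        n_pix = (FINE - s) // FACTOR
        frame = truth[s : s + n_pix * FACTOR].reshape(n_pix, FACTOR).mean(axis=1)
        for p, v in enumerate(frame):        # spread each pixel value back over
            accum[s + p*FACTOR : s + (p+1)*FACTOR] += v   # the cells it covered
            hits[s + p*FACTOR : s + (p+1)*FACTOR] += 1

    estimate = accum / np.maximum(hits, 1)
    # The 1.5-pixel detail is recovered (attenuated) even though no single
    # low-res frame can resolve it:
    print(np.corrcoef(truth[FACTOR:-FACTOR], estimate[FACTOR:-FACTOR])[0, 1])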
I never assumed those sensors actually counted individual people. For example, to get an idea of the population of a planet, you'd just have to measure things like carbon dioxide levels, technology (nuclear etc.), methane from farms, output from power stations, number of large cities, etc.; gather up enough of these kinds of "rough" measurements and I bet you could extrapolate a pretty good guess without having to actually count heads. I just imagined all the more esoteric figures from those sensors were extrapolated this way rather than actually measured directly. Sort of like how we find extrasolar planets today: no one has actually "seen" one; we've only detected them indirectly. Vespine (talk) 04:48, 29 April 2011 (UTC)[reply]
I think most of us are picturing a craft in low Earth orbit - maybe around 400 km from the planet. (Perhaps the original Star Trek artwork looks more like 4000 km.) Many of the old sci-fi stories would talk about the ship deploying one or more "sensor probes" - though it may be a stretch to assume they had interferometry in mind! Still, it is not necessary for the ship to literally be 1 km (or larger) in size for it to cast a web of sensors. I don't think it's unreasonable to say, by your calculation, that 1 mm × (400 km / 400 m) gets you a 1-meter resolution from low orbit with a 400 m boom, tethered outrigger, or probe. Of course, it is also necessary to detect the signal, mandating that the 100 W emitted by the human body, spread over 4π × (400 km)² ≈ 2 × 10^12 m², is detectable. Now I see SQUIDs can detect 5 attoteslas, and I think there's some relationship with RF detection[1] but I couldn't say how to relate that to this kind of measurement. Wnt (talk) 07:27, 29 April 2011 (UTC)[reply]
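A quick numeric check of the figures in that post, using the same rough resolution formula as above (sketch only):

    # Checking the low-orbit figures above (order-of-magnitude sketch).
    import math

    res = 1e-3 * 4.0e5 / 400.0                    # 1 mm wavelength, 400 km range,
    print(res)                                    # 400 m boom -> ~1 m resolution

    flux = 100.0 / (4 * math.pi * (4.0e5) ** 2)   # ~100 W of body heat spread over
    print(flux)                                   # a 400 km sphere -> ~5e-11 W/m^2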

SHM

when x = 0, t = 0, v = -ve, then the phase angle is equal to _________? —Preceding unsigned comment added by Tashsidhu007 (talk • contribs) 04:37, 29 April 2011 (UTC)[reply]

I hope you realize how little sense this makes. Your subject, SHM, is entirely ambiguous, and the initial condition you've given is meaningless without context. For your next homework assignment, please be more clear, and, also, don't ask here. We don't do homework. — gogobera (talk) 04:46, 29 April 2011 (UTC)[reply]
Comment - if you've been indoctrinated with a conventional physics education, you recognize SHM as simple harmonic motion and realize that the question actually provides all the information needed to solve it in conventional notation. So, to be fair, there was no ambiguity, just an assumption of common notation. But, we still won't do your homework! Nimur (talk) 14:32, 29 April 2011 (UTC)[reply]
I assume he means simple harmonic motion of a spring at its average length, with the length decreasing. But all the definitions are arbitrary. Assuming x is the sine of some hypothetical angle, then the angle is 180 degrees (because x is zero and decreasing) ... but it's truly arbitrary without proper definitions. Wnt (talk) 07:32, 29 April 2011 (UTC)[reply]
Okay, but what's the "-ve" ? – b_jonas 11:17, 1 May 2011 (UTC)[reply]
It is an unspecified constant. The phase angle may be represented in terms of that unspecified constant, and the natural frequency of the system, (also an unspecified constant, from the information given). The standard solution is provided in our article. Nimur (talk) 17:10, 1 May 2011 (UTC)[reply]
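For reference, here is the standard solution worked through, assuming the common sine convention (a sketch; with the cosine convention x(t) = A cos(ωt + φ), the same conditions give φ = π/2 instead):

    x(t) = A\sin(\omega t + \varphi), \qquad v(t) = \dot{x}(t) = A\omega\cos(\omega t + \varphi)
    x(0) = 0 \;\Rightarrow\; \sin\varphi = 0 \;\Rightarrow\; \varphi \in \{0, \pi\}
    v(0) < 0 \;\Rightarrow\; A\omega\cos\varphi < 0 \;\Rightarrow\; \varphi = \pi \;(= 180^\circ)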

Blowing up a tornado

We've had a question before: Wikipedia:Reference desk/Archives/Science/2007 October 10#What would a nuke do to a Tornado? A nuke might work, but it has a few problems, and it still leaves open the question: what would it take to make a successful military attack on a tornado? At a forum I found a nice demonstration that a simple house fire won't bother a tornado.[2] That shouldn't be a surprise: first, because the fire is relatively gradual and weak; second, because a tornado fundamentally is still a phenomenon with rising low-pressure air in the middle, so fire shouldn't bother that. Now I know that a tornado is a fearsome phenomenon with a lot of energy in it --- still, it is by nature a transient phenomenon, which makes me think that it might be tampered with by much lower orders of magnitude of energy. One reference I found claimed 3 trillion joules for an F3 tornado[3] - about 3/4 of a kiloton of TNT. So if there is a way to get substantial mechanical leverage, it might just be conceivable that the equivalent of a few dozen tons of flammable material (TNT isn't that special) could actually be transformative.

I'd welcome answers at various orders of magnitude, but to provide a reference standard: suppose a terrorist is driving a standard commercial tanker truck strapped with explosives on the main road out of his hometown, only to find a tornado coming the other way. He waits until it is about to strike the truck, then sets it off. Suppose the truck contains either (a) gasoline or (b) liquid nitrogen. Is there a chance that the sudden explosion and the net force upward or downward can break up the tornado or break it away from the ground, and tomorrow we'll be reading "Hero Terrorist Saves Town"? Wnt (talk) 07:06, 29 April 2011 (UTC)[reply]

I think the idea of using explosives is a bit silly, but a more practical method might be to place lots of man-made lakes around areas you want to protect. Passing tornadoes suck up the water, and lifting all that heavy water dissipates a great deal of energy. After the tornado breaks up, the water falls as rain. Now, lakes just for tornado prevention might not go over very well, but they can also serve as recreational lakes or water supply reservoirs, or potentially they can be used to store energy generated by solar and wind power (by raising the water level). Another option is to plant many trees, as smashing down trees also dissipates energy. However, this can also add lethal debris to the tornado, and leaves a mess behind afterward. StuRat (talk) 08:38, 29 April 2011 (UTC)[reply]
(ec) Perhaps it would be easier to find a way to guide a tornado's path and/or fix it to one location where it can't do much damage. Ideally there would be a specialized wind turbine (on the ground, with a vertical rotation axis) in that place to take advantage of the energy. So what makes a tornado move around? 93.132.171.155 (talk) 08:48, 29 April 2011 (UTC)[reply]
Clearly tornadoes are powered by the storms above, and must move with them; however, there is no guarantee that the storm's energy needs to reach the ground. Originally I was going to suggest a third truck filled with sand, or perhaps (liquid) polyurethane foam insulation - but I'm not sure that the "tornadic" variety of waterspout is really reduced that much in magnitude. Particles suspended in them must experience friction, and perhaps cause some downdraft, but they're generally far out at the edge (as shown in the photos in the article) rather than in the center. I'm not sure if pushing downward near the edge would really hurt the tornado - it might even help it stay stuck to the ground? Wnt (talk) 20:58, 29 April 2011 (UTC)[reply]
There's no evidence to suggest sporadic bodies of water will do anything to disintegrate or weaken a tornado in the long term. Once it reaches dry land, it has the immediate potential to stabilize and intensify to its previous velocity. Planting trees to try to stop a tornado would be equally ineffective. Trees in tornado-prone areas are maybe 100 feet tall, whereas mesocyclonic supercells may extend 50,000 feet into the atmosphere, and even a moderate tornado (EF2–EF3) will have no trouble snapping the majority of trees in its direct path. There's really nothing we can build physically to weaken tornadoes. Juliancolton (talk) 15:03, 30 April 2011 (UTC)[reply]
The original question seems to be in regard to taking out a tornado which has already formed. There are two main ingredients for sustaining a significant tornado: Convective Available Potential Energy (CAPE) and wind shear. (This is of course an over-simplification; dozens of different parameters can make or break a particular tornadic situation, but these are the main players.) It is the abundance of CAPE that allows parcels of air in and around the tornado to accelerate as they rise, thus drawing near-surface air inwards faster to fill the low-pressure void left behind by the rising air. The influence of wind shear is two-fold: you need deep-layer shear, or a substantial difference in the wind speed and/or direction between the lower and upper troposphere. This ensures that sustained updrafts (due to the abundance of CAPE) will be tilted, so that rain and hail which form as the warm, moist updraft air rises and cools will not fall into the updraft (which would both create drag slowing the updraft and cool the updraft air, making it less unstable) but to one side of the updraft. Secondly, you need low-level helicity, which is essentially the amount of spin in the atmosphere, which will intensify the rotation of the updraft as it is sucked inward towards the tornado via conservation of angular momentum (this is actually one of a few theories on tornado formation, but they are all along these same lines).
Now, there's no real way I can think of to reduce the amount of wind shear, so that leads me to trying to reduce the available energy for the tornado (CAPE). CAPE is fairly simple in theory: it is essentially related to the lapse rate of the atmosphere on a given day. If you lift a parcel from the surface and it is warmer than the surrounding atmosphere (and therefore less dense, and therefore will accelerate upwards), then it has positive CAPE; the bigger the temperature difference and the deeper the vertical distance along which this temperature difference exists, the higher the CAPE. So, if you could find some way to intensely cool the lower levels of the atmosphere in front of the tornado, or conversely, intensely heat the mid-levels of the atmosphere, this would be a way to kill an intense tornado. However, the energies required for this would be enormous: typical CAPE values for a major tornado outbreak are above 1000 J/kg, and when you consider that even a typical room contains about 100 kg of air, you're talking many terajoules of energy... not especially practical. For comparison, the United States uses about 400 terajoules of energy per minute, and the Hiroshima bomb released about 60 terajoules total.
However, there are ways to keep a high-CAPE situation from developing. In almost all significant tornado outbreaks, CAPE is able to increase substantially due to a cap in the atmosphere, or a mid-level area of warm, ideally dry air. Early in the day, when parcels at the surface begin to warm, become unstable, and rise, they will hit this area of warmer air, and because a parcel of air which is cooler than its surrounding environment will also be denser (heavier), it will sink back toward the surface without forming a storm. As the day wears on, surface parcels warm even further, until at some point a few get warm enough to accelerate straight through the capping layer to the much cooler air above, tapping into huge amounts of potential energy and forming intense thunderstorms. At the same time, the cap prevents weaker updrafts from creating thunderstorms, almost creating a natural selection that allows the most severe to prevail and hoard all the available energy. On days when this cap does not exist, the surface warms and storms form early in the day; they are widespread and much less intense, since even weak updrafts can survive to create storms. The resulting widespread rain and clouds prevent the surface from heating further during the day, and thus the extremely unstable situation never materializes.
Sorry, this has gotten a lot more technical than I intended to get, but I hope I've been clear enough: In conclusion, if you were looking to deter a tornado, it would be much easier to be pre-emptive about it: find a way to create thunderstorms earlier in the day to release the potential energy before it builds up to a breaking point. Here is where you might consider a nuclear bomb, but then, that's sort of like keeping your house safe from falling trees by setting the forest on fire: you're just making worse problems for more people.-RunningOnBrains(talk) 12:22, 3 May 2011 (UTC)[reply]
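To put a rough number on the CAPE argument above, here is a back-of-envelope sketch; the inflow-layer dimensions are my own assumption, chosen only to set the scale:

    # Order-of-magnitude energy needed to neutralize the CAPE feeding a storm.
    # The inflow-layer size below is an assumed figure, purely illustrative.
    CAPE = 1000.0                  # J/kg, the major-outbreak value quoted above
    RHO = 1.2                      # kg/m^3, near-surface air density
    volume = 10e3 * 10e3 * 1e3     # assumed 10 km x 10 km x 1 km inflow layer, m^3

    energy_tj = CAPE * RHO * volume / 1e12
    print(energy_tj)               # ~120 TJ -- about two Hiroshima bombs (~60 TJ each)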

Reference desk (science) quasi-semiprotected?

I know this should go to the help desk, but I can't edit the help desk page either; this page here at least has an "ask a question" button. As discussed earlier, for some users this page (and the help desk page) are effectively semi-protected. I can't see the per-section edit buttons, and at the top of the page there is no "Edit" tab but a "View source" tab instead. This goes away as soon as I manage an edit via the only possibility left, the "ask a question" button. The help desk page doesn't have one, so there's no way to ask there. The condition reappears as soon as I do "clear private data" in my browser, for example after doing online banking. So what shall I do? An empty question is not possible. Shall I make up a question each time, or shall I refrain from giving answers? 93.132.171.155 (talk) 11:02, 29 April 2011 (UTC)[reply]

Click "view source" anyway and it should take you to a normal edit page. You can also try purging the server cache, which should make the section edit links come back. There is something wrong with Wikipedia's server cache which is causing these problems; lots of people are getting them. 82.43.89.63 (talk) 11:33, 29 April 2011 (UTC)[reply]

Breathing gas

Astronauts and SCUBA divers carry their oxygen in heavy bulky pressurized tanks. Can breathable oxygen be made instead by a reaction between chemicals that can be easily carried at room temperature and pressure? Cuddlyable3 (talk) 12:08, 29 April 2011 (UTC)[reply]

In emergencies, astronauts can use oxygen candles, which are pretty much as you describe. --Tango (talk) 12:24, 29 April 2011 (UTC)[reply]
The closest to what you describe would be an oxygen rebreather, which uses a smaller tank than regular scuba and recirculates the oxygen through a chemical 'scrubber' to remove CO2. But this is about as bulky as regular scuba tanks. To produce oxygen chemically in a usable form would require production control, collection, quality control, filtration, compression and usage regulation. Not feasible in a portable unit. 190.56.105.199 (talk) 15:23, 29 April 2011 (UTC)[reply]
Note that using another chemical which contains oxygen would increase the weight as well, since you have the non-oxygen component of those chemicals to carry, in addition to the oxygen. Also, what happens to the remainder? If it still must be carried after the oxygen is used up, this increases the weight at that point, versus a normal tank.
Another approach is to lower the temperature to where oxygen becomes a liquid at normal pressure. This eliminates the need for heavy tanks, but does require that you have cold temperatures available and a way to heat it back up when using it. This might work well in a space ship, as the side of the ship away from the Sun might get cold enough for liquid storage, while the side pointed toward the Sun could be used to warm it back up. StuRat (talk) 20:18, 29 April 2011 (UTC)[reply]
Room pressure is no use for a scuba diver - what's needed is ambient pressure, which increases by approximately one atmosphere for every ten metres of depth. So either your oxygen candle would need to be cleverly regulated to automatically produce the correct amount as a function of ambient pressure, or produce at least as much as could ever be required (say seven times as much) and somehow vent the excess. 78.245.228.100 (talk) 21:13, 29 April 2011 (UTC)[reply]
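The "seven times" figure corresponds to roughly 60 m of depth under the one-atmosphere-per-ten-metres rule; a tiny sketch:

    # Ambient pressure vs depth, using the ~1 atm per 10 m rule quoted above.
    # A diver's gas consumption scales roughly with ambient pressure.
    def ambient_atm(depth_m):
        return 1.0 + depth_m / 10.0

    for depth in (0, 10, 30, 60):
        p = ambient_atm(depth)
        print(f"{depth} m: {p:.0f} atm -> ~{p:.0f}x surface gas consumption")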
Hmmm, how about mainlining oxygen at high ambient pressure? I'd think a small syringe of oxygen at such pressure would be enough to last a person for some time. Or could you dissolve it in perfluorocarbon? Of course, it would be best to break up the bubbles a bit to reduce the risk of abrupt death... ;) Wnt (talk) 20:28, 30 April 2011 (UTC)[reply]
If you go scuba diving regularly, you will learn that you can make do with a lot fewer breaths and still get adequate oxygen. But this is far from natural - it's one reason why experienced divers can stay under water 2 or 3 times longer than beginners with an equivalent bottle of air. Beginners will use nearly the same volume of gas as they would under normal pressure. --Stephan Schulz (talk) 20:37, 30 April 2011 (UTC)[reply]
I'm not sure why you'd want to dilute the O2 with anything exotic; good old nitrogen is cheap and will do the trick nicely. To maintain a healthy 0.2 bar O2 partial pressure, though, you'd need to compensate by increasing the partial pressure of the diluent, which will increase its narcotic effect. As noted above, the system required to mix the appropriate amounts of oxygen and diluent (half of a rebreather) is a bulky and expensive piece of equipment - a long way from the original emergency backup system suggested. 78.245.228.100 (talk) 21:02, 30 April 2011 (UTC)[reply]
Nitrogen Narcosis comes to mind in some higher pressure cases. Googlemeister (talk) 19:49, 2 May 2011 (UTC)[reply]

biotechnology

What may be the questions asked in a biotech quiz? —Preceding unsigned comment added by Ruma R Sambrekar (talk • contribs) 13:19, 29 April 2011 (UTC)[reply]

What biotech quiz? Plasmic Physics (talk) 13:58, 29 April 2011 (UTC)[reply]
This is not a crystal ball. Please see biotechnology.--Shantavira|feed me 16:11, 29 April 2011 (UTC)[reply]
Yes, but this is a Magic 8-Ball. "Concentrate and ask again" B) Wnt (talk) 21:10, 29 April 2011 (UTC)[reply]
"It is decidedly so." StuRat (talk) 21:44, 29 April 2011 (UTC) [reply]

reverse osmosis

Why do reverse osmosis membranes clog? After how long do we need to replace the membranes, if we use our machine 16 hours every day? —Preceding unsigned comment added by 41.215.125.38 (talk) 16:50, 29 April 2011 (UTC)[reply]

All filters will eventually become clogged, but without more detail it is impossible to say how often it will need replacing. See reverse osmosis and membrane. Replace the membrane when it no longer functions correctly.--Shantavira|feed me 18:05, 29 April 2011 (UTC)[reply]
The stuff that is filtered out accumulates until it clogs the filter. There is a way to automatically clean out a reverse osmosis filter, by reversing the pressure difference and thus the water flow direction. However, you don't want to drain that "bad water" back into the fresh water pipe, so it needs to switch over to drain into the sewer line. This requires additional plumbing and complexity and control electronics, so increases the price substantially. Note that this can extend the life of the filter, but not indefinitely, as deposits will still accumulate which can't be simply rinsed out. In the case of a desalinization reverse osmosis filter, the amount of salt which accumulates is so high that a different process must be used which drains out the salt brine continuously, rather than periodically. StuRat (talk) 20:04, 29 April 2011 (UTC)[reply]
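The switch-over StuRat describes could be sequenced by very simple control logic; here is a toy sketch (the class, valve names, and timing are invented for illustration, not any real product's API):

    # Toy backflush sequence for the scheme described above. Everything
    # here (class, valve names, timing) is made up for illustration.
    import time

    class RoUnit:
        def set_valve(self, name, state):
            print(f"valve {name} -> {state}")
        def set_pump(self, direction):
            print(f"pump -> {direction}")

    def backflush(unit, flush_seconds=30):
        unit.set_valve("product_out", "closed")  # never rinse into the fresh line
        unit.set_valve("drain", "open")          # route the dirty rinse to the sewer
        unit.set_pump("reverse")                 # invert the pressure difference
        time.sleep(flush_seconds)
        unit.set_pump("forward")                 # resume normal filtration
        unit.set_valve("drain", "closed")
        unit.set_valve("product_out", "open")

    backflush(RoUnit(), flush_seconds=1)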

Transit time

What is the average transit time for the average adult human for food to go from the mouth to the end of the ileum?--92.28.85.30 (talk) 20:01, 29 April 2011 (UTC)[reply]

Our article Human gastrointestinal tract#Transit time seems to suggest a total time of around 35-45 hours. ny156uk (talk) 20:07, 29 April 2011 (UTC)[reply]

The questioner is the banned user "Light current". Thank you for feeding the troll. ←Baseball Bugs What's up, Doc? carrots→ 09:52, 1 May 2011 (UTC)[reply]
Could just be that they were trying to ask an otherwise reasonable question. No need to accuse someone of doing some kind of harm for answering a question, unless your comment might prevent them from doing it again in the future, which it wouldn't. Chris M. (talk) 14:45, 3 May 2011 (UTC)[reply]

Irish wildfires

I am wondering what's the average rate of wildfires in Ireland, which one was the most intense, and what's the most frequent cause. --109.76.53.162 (talk) 20:16, 29 April 2011 (UTC)[reply]

The ultimate cause, as everywhere else, is no doubt an accumulation of flammable undergrowth (which can be worsened by humans preventing and stopping small fires which naturally burn it off). The more recent cause is dry weather. The trigger, on the other hand, can be arson, campfires, lightning, etc. Are you asking about the triggers ? StuRat (talk) 20:36, 29 April 2011 (UTC)[reply]
This link ([4]) has stats on European wildfires. From a quick survey, I'd estimate that Ireland sees between 400 and 800 fires per year, and a similar total burned area in hectares, which would suggest individual fires are very small (at least by global standards), maybe 1-2 hectares on average. No idea about the largest on record. The Interior (Talk) 21:31, 29 April 2011 (UTC)[reply]
You're assuming they are all about the same size. That could be many tiny fires and a few huge ones. Of course, how big can a fire get on a medium sized island ? StuRat (talk) 21:42, 29 April 2011 (UTC)[reply]
Up here in Canada, we occasionally get fires in the north larger than European nations like Ireland. If they're far enough away from communities, we just let 'em burn. The Interior (Talk) 21:57, 29 April 2011 (UTC)[reply]

Well, a few weeks ago there was a bad gorse fire on one of the mountains; it was close to houses, and it seems to me that wildfires in my area are spawning more frequently. I live in the north Cork area, surrounded mainly by the Knockmealdown and Galtee mountains. We are having an unusually dry and warm spring with very little rain; this started in February, after the unusually freezing winter we had. So is it possible that Ireland is seeing an increase in wildfires? Also, I know that farmers around here are known to start wildfires to kill off weeds, but with strong winds around they are turning dangerous. Our area also faced the same conditions last year. Also, there has not been any lightning or thunderstorms near my area. --109.78.54.208 (talk) 11:43, 30 April 2011 (UTC)[reply]

It could be that the good weather has brought the idiots out, with their discarded cigarettes and barbeques. Happens a lot in the UK. --TammyMoet (talk) 15:56, 30 April 2011 (UTC)[reply]

Ventral Aorta

Our article on Nikolay Ivanovich Pirogov currently states: "He completed further studies at Dorpat (now Tartu), receiving a doctorate in 1832 on the ligation of the ventral aorta." I have two questions based on this. We don't have an article on "ventral aorta", and a quick Google search reveals that, although fish have one (and a dorsal aorta into the bargain), mammals only have one aorta, which is divided into various sections. Is "ventral aorta" in the Pirogov article a mistranslation or obsolete term, and, if so, what should it be? Secondly - what would be the point of ligating the aorta? There are quicker ways of killing the patient, and I can't see any other outcome from such a procedure. Tevildo (talk) 20:55, 29 April 2011 (UTC)[reply]

I didn't find anything but Wikipedia's own backwash and a Google nyah-nyah to an ad for a book [5] which may or may not use the term behind a paywall. Likely some others can be found, though who knows if they say anything more than this one line. The definitive test - tracking down an 1832 Russian thesis - is probably a real pain; even the old paper archives in good libraries don't generally go back that far. I suppose there's a chance that his alma mater's library stocks a copy of the thesis, under special restrictions.
But if I had to take a guess, I'd guess that he was doing research in some animal with a ventral aorta. After all, even backward Tsarist Russia probably didn't encourage students testing that kind of thing on humans! Wnt (talk) 21:19, 29 April 2011 (UTC)[reply]
That sounds reasonable - doing experiments on fish for one's doctoral thesis is rather more likely than performing fatal operations on a non-existent artery of a human patient, after all. Is there any legitimate means of mentioning this in the article? Tevildo (talk) 11:11, 30 April 2011 (UTC)[reply]
I was going to suggest you ask the original contributor.[6] But it appears that this was associated with User:ALoan, an admin who seems to have retired shortly after criticizing the Wikipedia:Requests for arbitration/Badlydrawnjeff decision - a political event which elevated BLP over encyclopedism, and marked the moment at which Wikipedia shifted from exponential growth to steady decay. So I don't think that will be a likely option - actually, I'm not even sure where you'd try to ask. But he added at the same time a "reference" that the article was based on a translation of the de.wikipedia article, so maybe there is more information about it there. But I don't see anything about it in the current version. Wnt (talk) 21:43, 29 April 2011 (UTC)[reply]
[citation needed] Nil Einne (talk) 07:36, 1 May 2011 (UTC)[reply]
My guess is that the descending aorta was intended. You would rarely ligate that in a human patient as far as I know (since it shuts off blood flow to the lower part of the body), but it is sometimes done in terminal experimental techniques, such as perfusing the brain with a fixative. Looie496 (talk) 22:00, 29 April 2011 (UTC)[reply]

Just thinking - perhaps it should be Abdominal aorta? "Ventral/Abdominal" is a possible mistranslation, and the Abdominal Aorta page does mention ligation. I suppose that we should track down the original source. * sighs * Tevildo (talk) 21:39, 2 May 2011 (UTC)[reply]

DNA differences

A friend of mine heard that two people can have 5% difference in DNA. I thought that was bigger than the difference between humans and chimps. Is this correct? Bubba73 You talkin' to me? 21:56, 29 April 2011 (UTC)[reply]

The usual figure for the difference between two average people's DNA is around one tenth of one percent. Looie496 (talk) 22:01, 29 April 2011 (UTC)[reply]
Perhaps that 5% isn't of all the DNA. For example, there's lots of DNA that doesn't appear to do anything useful, it's just "junk DNA". So, if you don't include that, then the percentage difference between people would be higher, but probably still nowhere near 5%. Maybe if you only used that DNA which varies from humans to our nearest relatives (either extant or extinct), then the variation from one person to another might be closer to 5% of that. StuRat (talk) 22:19, 29 April 2011 (UTC)[reply]
I assume it would be different if you were comparing raw DNA versus comparing genes. Humans have 220 million base pairs in their DNA, but only 20,000 or so genes. --Mr.98 (talk) 22:56, 29 April 2011 (UTC)[reply]
Minor correction - the usual figure for the human genome is ~3 billion base pairs. --- Medical geneticist (talk) 11:16, 30 April 2011 (UTC)[reply]
You're right — I was quoting the length of a single chromosome without realizing it. --Mr.98 (talk) 13:40, 30 April 2011 (UTC)[reply]
I think my friend heard the 5% figure on NPR Science Friday today. Bubba73 You talkin' to me? 22:20, 29 April 2011 (UTC)[reply]
Chimpanzees have about "94 percent the same genes", looking at paralogues that aren't shared between the two (see the original reference from that article[7]). There are a variety of different statistics that can be put forward, depending on which parts of the DNA you leave out and how you count a match (similarity vs. true orthology). I'd watch how the noncoding DNA is counted, especially when comparing comparisons. Wnt (talk) 22:22, 29 April 2011 (UTC)[reply]
If someone has access right now, [8] should give good measures of the overall difference in the most divergent known humans. Wnt (talk) 22:32, 29 April 2011 (UTC)[reply]
(Snaking their supplemental data, I see they use a mutation rate of 2e-8 per generation, generation time of 20 years, and assume Neanderthals diverged <1 million years ago. Which seems to put an upper limit of 0.2% on the neutral divergence at noncoding positions. However, their alignments have edges, for various technical reasons but also due to variations in what is included in even modern humans, so I haven't really proved anything as of yet. =( Wnt (talk) 22:45, 29 April 2011 (UTC)[reply]
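Spelling that estimate out (two diverging lineages, so twice the per-lineage accumulation):

    2 \times \underbrace{2\times10^{-8}}_{\text{per site per gen.}} \times \underbrace{\frac{10^{6}\ \text{yr}}{20\ \text{yr/gen.}}}_{5\times10^{4}\ \text{gen.}} = 2\times10^{-3} = 0.2\%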
5% is an order of magnitude too high. See human genetic variation. Current estimates are that the nucleotide diversity between humans is about 0.1% (1 difference per 1000 bases) -- the figure Louie gave -- and that copy number variation creates an additional 0.4% difference between individuals. Perhaps your friend heard "point 5 percent". --- Medical geneticist (talk) 23:07, 29 April 2011 (UTC)[reply]

Actually it was on NPR "Tech Nation" and I'm not sure if it was today. Bubba73 You talkin' to me? 03:50, 30 April 2011 (UTC)[reply]

Wait, wait. If the average human gene length is 3000 base pairs [9], and the typical rate of nucleotide differences is 0.1% (1/1000), doesn't that imply that any two people are unlikely to share the same allele? It suggests that on average we will have 3 point differences per gene. If that's true, then it would seem that if you compare any two people you will find that most of their alleles are different. That seems qualitatively very different from saying we are 99.9% the same. (Presumably there is some additional accounting for the rate of differences in exons rather than non-coding DNA, as well as the fact that some nucleotide replacements don't impact the expressed protein.) Dragons flight (talk) 04:36, 30 April 2011 (UTC)[reply]
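A sketch of that arithmetic under a naive independence assumption (which the reply below explains is exactly what haplotype structure violates):

    # Naive Poisson model of the argument above: if differences were
    # independent at 1 per 1000 bases, how often would two random copies
    # of a 3000-base gene match exactly?
    import math

    diff_rate = 1e-3          # pairwise differences per base
    gene_len = 3000           # bases, the average gene length quoted above

    expected = diff_rate * gene_len       # ~3 differences per gene
    p_identical = math.exp(-expected)     # Poisson probability of 0 differences
    print(expected, p_identical)          # 3.0, ~0.05 -> ~5% chance of a match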

You are assuming that the 1:1000 rate of variation represents changes that are unique to each individual. On the contrary, most (90%) of the nucleotide variation is accounted for by "single nucleotide polymorphisms" (SNPs) that are found in different proportions of the population (often defined as a "minor allele frequency" of > 5%). For example, at a certain position in the genome 70% of us have an "A" and 30% of us have a "T". Or 40% have an "A", 35% have a "G", and 25% have a "C". These common SNPs are generally organized in blocks called haplotypes such that people tend to share the same groups of SNPs across variably long stretches (i.e. not all possible combinations of SNPs are observed). This results in genes having defined "versions" (alleles) in the population. There are also polymorphisms that are present at lower frequency (0.1% - 5%) that we will be able to detect once tens of thousands of human genomes have been sequenced. It's basically going to look like a continuum from a few (~n × 10^3) rare/unique differences, a handful (~n × 10^4 - 10^5) of low-frequency variants, and a large amount (3-4 million) of common variants per person. --- Medical geneticist (talk) 11:14, 30 April 2011 (UTC)[reply]
Yes, I know that differences will cluster into specific alleles. However, that doesn't really remove the issue. If 1/1000 nucleotides is going to be different between any two people, then one would have to expect that a large number of the genes will have different alleles in order to accomplish that. So, what fraction of my alleles are likely to be different from yours? Looking at a (relatively) simple system like ABO blood type, the most common allele (O) occurs in only about 50% of people (at either homologous chromosome). The effect is that if you choose two people at random, they are more likely to have at least one different blood type allele than they are to have both be the same. Less simple phenotypes, such as hair color, eye color, skin tone, height, etc., would also seem to suggest that if you pick two humans at random, they are likely to have many alleles that are different. Given a gene length of a few thousand base pairs, and a difference rate of 1/1000 base pairs, it would generally suggest that on average most genes should be like this (assuming the differences are distributed roughly uniformly through the genome). In other words, the typical gene would have enough major allele variations to ensure that in a comparison of any two random people there are many, many different alleles expressed. Dragons flight (talk) 18:39, 30 April 2011 (UTC)[reply]
Perhaps we don't share many alleles with a randomly selected other person, but the different alleles usually aren't functionally different. To use your example, there are in fact not 3 alleles determining ABO blood group. ABO (gene) lists 6 and says there are more that aren't as common. There is apparently not much, if any, difference in the results of the different versions, which is why we don't usually worry about them. --Tango (talk) 23:05, 30 April 2011 (UTC)[reply]