
Talk:Reciprocity (photography)

From Wikipedia, the free encyclopedia

Fast Shutter Speeds


Reciprocity failure also exists at the other end of the spectrum -- when shutter speeds are faster than 1/1000 of a second. In that case the aperture has to be opened wider to compensate for the short exposure time.

Reciprocity failure is supposedly useful in astrophotography. Can someone elucidate?

I think the issue is the same as in "normal" long-exposure photography. But as I agree that your question will probably arise again from readers, something short should really be added. --Bertoche (talk) 04:20, 13 October 2011 (UTC)[reply]

EV=5 Example


Shouldn't the exposure time be 4.5 seconds in the example? I realize that it is nice to round off, but in this case, the round-off error is almost 10% of the exposure time. I'll change it in a few days, unless there are objections. (neffk <neffk ta ieee tod org> 9 April 2006)
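For readers checking the arithmetic: EV is defined by 2^EV = N²/t, so t = N²/2^EV. A minimal sketch, with f/12 as a purely illustrative aperture chosen only because it reproduces the 4.5 s figure (the article example's actual settings aren't quoted in this thread):

```python
# EV arithmetic check for the example discussed above.
# By definition 2**EV = N**2 / t, so t = N**2 / 2**EV.
# NOTE: f/12 is an assumed aperture, picked only because it yields the
# 4.5 s figure; the article example's real settings aren't quoted here.

def exposure_time(ev, f_number):
    """Exposure time in seconds for a given EV and f-number."""
    return f_number ** 2 / 2 ** ev

print(exposure_time(5, 12))  # 4.5 s; rounding to 4 s is off by about 11%
```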

Reciprocity model


The section says:

In the late 1980s, Shutterbug Magazine (see refs.) published tables of reciprocity factors for a wide variety of common commercial color print films (Kodak, Fuji, ...), the type of film with the worst low-light reciprocity failures. The films were the most common speeds, something like 50-1000 ASA/ISO, exposed for periods of seconds to hours in very low light. Generally the reciprocity factors, independent of brand and speed, fell along a rising curve that can be described as:
Best exposure time in seconds = 2.5 × (Metered exposure time in seconds)^1.5
which can be used as a rule of thumb for very low light exposures of standard color print film, metered for a second or more. A metered one second really required 2.5 to expose the film accurately, and a metered 60 seconds really required nearly 20 minutes.
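The quoted rule of thumb is easy to evaluate; a minimal sketch reproducing the quote's own 1 s and 60 s examples (not an endorsement of the model, which is disputed below):

```python
# Sketch of the rule of thumb quoted above (its accuracy is disputed
# in the discussion that follows):
#   best exposure time = 2.5 * (metered time in seconds) ** 1.5

def best_exposure_time(metered_seconds):
    """Corrected exposure time per the quoted Shutterbug-derived rule."""
    return 2.5 * metered_seconds ** 1.5

print(best_exposure_time(1))        # 2.5 s, as the quoted text says
print(best_exposure_time(60) / 60)  # about 19.4 min, "nearly 20 minutes"
```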

Now, this makes no sense at all. It has a reciprocity factor of 2.5 already at 1 second, and getting worse. I've never seen film this bad. Can someone provide the exact reference, and maybe more details from it or a copy of it? Otherwise, we should take this out. Dicklyon 03:29, 9 April 2007 (UTC)[reply]

I remember the article and dataset, and have used it for many years, for exposures outside, at night, from 20 minutes to 4-5 hours with good results. It's easy (and conclusive) to test. Get a good camera that can meter down to EV2, outside on a moonless night with distant street lights. The metered exposure for ISO 200, at f22, is one minute, but the model says 20 minutes. Shoot both. The first prints black (or granulated gray) with a few small points of light. The second prints like an other-worldly daylight shot with white globes for the lights. -74.242.254.20 (talk) 14:10, 24 February 2009 (UTC)[reply]

If we can have a source, I don't mind citing it. But it does seem a bit incredible to me that film is this bad at 1 second usually, and that such a power law ends up being adequate over a range of films and times. Of course, it also needs to be cut off some place, such as at the point where it gives an answer less than the metered exposure. Dicklyon (talk) 01:42, 25 February 2009 (UTC)[reply]
It's not incredible at all for a product to perform badly outside the use for which it's designed (~EV5 and up, ~1 second and faster). The utility of a single approximate model, no doubt, results from the great latitude color print film has in processing and printing. -65.246.126.130 (talk) 16:54, 27 February 2009 (UTC)[reply]
Right, but most sources show that 1 stop of correction kicks in at around 1 to 10 seconds; your formula has a bit over 1 stop already at 1 second. Anyway, I have a number of sources with tables for particular films; but no formulae. Dicklyon (talk) 05:15, 28 February 2009 (UTC)[reply]
The model's utility no doubt derives from its simplicity and always being within a couple EVs of the exact exposure (OK for color print film) for all brands and ISO ratings of film. The old Shutterbug article should clinch it. -65.246.126.130 (talk) 19:33, 3 March 2009 (UTC)[reply]
I see lots of variability in other data. But if you can produce the article, we can cite it. Dicklyon (talk) 04:42, 4 March 2009 (UTC)[reply]

Digital


I think it should be specified whether Reciprocity is an issue with digital camera sensors. The section on astrophotography implies that it isn't, but doesn't say outright.--Tiberius47 06:34, 15 August 2007 (UTC)[reply]

Electronic sensors have a different kind of reciprocity failure. In silver grains, there's a "leakage" toward dark; that is, grains need several photons to turn, but after getting one photon and sitting a while they can forget, and leak back to an unexposed state. Photodiodes go the other way; they leak toward light. So with very long exposures, instead of less sensitivity you get more "fog" and "noise", which is sort of equivalently bad. But it has been possible to make some really excellent low-leakage CCDs, especially when cooled. That's the current best type of sensor for very long exposures. I wish I had a good source on all this. Dicklyon 06:48, 15 August 2007 (UTC)[reply]

Source references

  • "The reciprocity law, enunciated by Bunsen and Roscoe in 1862, states that for any given photosensitive material, the photochemical effects are directly proportional to the incident light energy, i.e. the product of the illuminance and the duration of the exposure. For a photographic emulsion this means that the same density will be obtained if either illuminance or exposure duration is varied, provided that the other factor is varied in such a way as to keep the product H in the equation H = Et (encountered earlier) constant.
Abney first drew attention to the fact that the photographic effect depends on the actual values of H and t, and not solely on their product. This so-called reciprocity failure arises because the effect of exposure on a photographic material depends on the rate at which the energy is supplied. All emulsions exhibit reciprocity failure to some extent, but it is usually serious only at very high or low levels of illumination, and for much general photography the reciprocity law can be considered to hold. In the sensitometric laboratory, however, the effects of reciprocity failure cannot be ignored, nor in certain practical applications of photography."
Geoffrey G. Atteridge (2000). "Sensitometry". In Ralph E. Jacobson, Sidney F. Ray, Geoffrey G. Atteridge, and Norman R. Axford (eds.). The Manual of Photography: Photographic and Digital Imaging (9th ed.). Oxford: Focal Press. p. 238. ISBN 0-240-51574-9.
Please use this as you see fit. Redbobblehat (talk) 01:59, 27 February 2009 (UTC)[reply]
I started a history section, citing that and others. It could be cited in more places in the article, too. Dicklyon (talk) 06:10, 27 February 2009 (UTC)[reply]
  • "Law of Reciprocity (1862) Physics Discovered by the German physicist Robert Wilhelm Eberhard Bunsen (1811–99) and the British chemist Henry Enfield Roscoe (1833–1915), this law states that when all other conditions are kept constant, the exposure time needed to give a certain photographic density is inversely proportional to the intensity of the radiation. The law does not hold at high and low light intensities, when reciprocity failure is said to occur. (J Thewlis, ed., Encyclopaedic Dictionary of Physics 1962)" [1] --Redbobblehat (talk) 17:17, 13 March 2009 (UTC)[reply]
  • "The so-called law of reciprocity states that sources of light of different intensity I produce an equal degree of blackening in their photographic images under different exposures t if the product I x t has the same value in different cases."
K. Schwarzschild "On The Deviations From The Law of Reciprocity For Bromide Of Silver Gelatine" The Astrophysical Journal vol.11 (1900) p.89 [2] --Redbobblehat (talk) 17:17, 13 March 2009 (UTC)[reply]
The latter is cited already; say more about it if you like. The former is trivial; dictionaries are not generally useful sources when better ones are available. Dicklyon (talk) 21:13, 13 March 2009 (UTC)[reply]

Reciprocity law = H = Et ?


I'm not very clear as to whether the reciprocity law defines photometric exposure itself (as intensity x duration) or a film's responsivity to exposure ... or both ? Would it be correct to say:

  • In this context "reciprocal relationship" and "reciprocity" means 'mutual benefit', a 'direct relationship', or 'direct proportionality', and not its complete opposite evil twin: the "reciprocal of" x = 1/x, which means 'at the expense of', an 'inverse relationship', or 'inverse proportionality' ... unless I'm missing something ?
  • By extension, aperture and shutter settings are "reciprocal exposure settings", and the EV system is simply a convenient scale of "reciprocal-exposure increments" ... if EV = log2(N² * t), the EV scale wouldn't be upside down ;-)?
  • However : "Bunsen-Roscoe law : In two photochemical reactions, e.g., the darkening of a photographic plate or film, if the product of the intensity of illumination and the time of exposure are equal, the quantities of chemical material undergoing change will be equal; the retina for short periods of exposure obeys this law. Synonyms: reciprocity law, Roscoe-Bunsen law." [5]. This would also appear to be the accepted meaning in biology [6].
  • Also, according to Atteridge (above), the reciprocity law states (effectively) that the film's density response (D) is directly proportional to the exposure (H = Et). So where γ represents the film's gamma or linear (contrast) slope, the reciprocity law would be :
D = γ · log10(Et)
  • So if reciprocity failure occurs when the film's density response is not directly proportional to the photometric exposure, the "toe" and "shoulder" of an HD-curve (as notable deviations from the linear gamma slope) represent a kind of reciprocity failure ? (see also [7])
  • Reciprocity failure is peculiar to photochemical film, etc, but is not an issue for photoelectric sensors ?

--Redbobblehat (talk) 03:45, 28 February 2009 (UTC)[reply]


It's not that complicated. Exposure is H = Et. Reciprocity says all you need to know is H. But it's not quite true. Same for electronic sensors and eyes: it works to first order, but not at the extremes. Dicklyon (talk) 04:50, 28 February 2009 (UTC)[reply]
Indeed. I'm suggesting that a more accurate and precise explanation of the reciprocity law and its application in photography would help clear up a lot of the confusing complexity and ambiguity in this and other articles that are built upon it; eg. exposure value, exposure (photography), sensitometry, etc. I liked this one:
  • Law of Reciprocity (1862) Physics Discovered by the German physicist Robert Wilhelm Eberhard Bunsen (1811–99) and the British chemist Henry Enfield Roscoe (1833–1915), this law states that when all other conditions are kept constant, the exposure time needed to give a certain photographic density is inversely proportional to the intensity of the radiation. The law does not hold at high and low light intensities, when reciprocity failure is said to occur. (J Thewlis, ed., Encyclopaedic Dictionary of Physics 1962) [8]
IMO the reciprocity law is, and always has been, about the photosensitivity of materials. Conflating this with some contrived and flimsy redefinition of exposure (e.g. "effective exposure" H') is both inaccurate and confusing. Illuminance is defined as E = H/t, because both H and t are measurable quantities; illuminance cannot be empirically measured independently of time; it can only be inferred mathematically. This means that by definition exposure H = Et, so it cannot be subject to reciprocity failure or Schwarzschild's exponent, and is pretty much irrelevant to this article. (Optical) Density D is the accepted measure of photographic response and is clearly defined by Hurter & Driffield as log10(1/transmittance); again, a measurable quantity ("blackening"). The whole point of HD curves is to show the irregular relationship between D and Et for a specific photosensitive material where t is effectively constant (i.e. exposure duration is the same for all parts of the image). The whole point of Schwarzschild's law is to quantify the non-linear relationship between D and Et for a specific photosensitive material where E is effectively constant.
The "general rule" for calculating the photographic exposure required under "normal circumstances" still assumes that D ∝ Et (specifically D = log10(Et)); that the tone rendered in the photograph is directly proportional to the exposure intensity and duration. In practice, deviation from this rule is attributed to the sensitometric characteristics of the particular sensor. Silver halide emulsions exhibit significant reciprocity deviations, whereas CCD and CMOS sensors do not (increased "thermal" noise is not directly linked to reciprocity failure, but it is significant for electronic sensors during long exposures and other situations).
A camera's traditional exposure controls, namely aperture size and shutter period, allow independent control over E and t respectively. This arrangement provides the (manual) flexibility necessary in circumstances where the reciprocity law fails. The exposure value system, and most auto-exposure computers, are based entirely on the assumption that the reciprocity law holds, and are therefore only useful under "normal circumstances". Luckily, manufacturers of modern photographic print film have been able to push these "normal circumstances" to safely accommodate exposure times ranging from 1/4000 to 1 second (or more), and a useful exposure range of about 7 stops (or more).
IMO, if the articles follow this "basic rule and known exceptions" structure, explanations can be greatly simplified and the correct terminology will fall into place, be more consistent and seem more self-explanatory. --Redbobblehat (talk) 16:29, 7 March 2009 (UTC)[reply]
I think you're confused, but make us a proposal. What would you write? (And don't include your two confused uses of D; the D in the Schwarzschild formula can't also be density, and no source has presented it as such; it's the quantity, analogous to H where reciprocity holds, that leads nonlinearly to density). Dicklyon (talk) 16:39, 7 March 2009 (UTC)[reply]
What do you think Schwarzschild means by "the same degree of blackening" ? You may find his paper on the subject worth reading: K. Schwarzschild "On The Deviations From The Law of Reciprocity For Bromide Of Silver Gelatine" The Astrophysical Journal vol.11 (1900) p.89 [9] --Redbobblehat (talk) 11:50, 8 March 2009 (UTC)[reply]
Exactly what he says. If in the reciprocal region, a given exposure H leads to a certain degree of blackening or density, then in the low-intensity region, if his formula gives an H' ("effective exposure") equal to that original H, it will lead to the same degree of blackening, or density. If you write the HD curve as a function D(logH), then in the reciprocity failure region, you'll have D = f(logH'). The H' is not D, but is the "effective stimulus" that leads to D through the characteristic curve f; from the point of view of that function, it has the same units and meaning as the H in the reciprocal region, and is logically called the "effective exposure". Dicklyon (talk) 23:12, 8 March 2009 (UTC)[reply]
No. Your "effective exposure" would not be H' but Hp = Etp ... I don't think you'll get a Nobel Prize for that one ;-). Bunsen, Roscoe, Hurter, Driffield, Abney and Schwarzschild were all trying to predict photochemical "blackening", which we now know as optical density D. --Redbobblehat (talk) 03:19, 9 March 2009 (UTC)[reply]
Your formula Hp = Etp is nonsensical and your logic impenetrable. But you're right about one thing: I'm not expecting to win any prizes for trying to educate you. Dicklyon (talk) 06:53, 9 March 2009 (UTC)[reply]
Dick, it's not my logic. It's the logic devised by a series of eminent physicists and chemists that has been adopted by the photographic industry. It's really very simple when you apply it to the phenomena it was intended to account for. In practical photography, reciprocity failure is a feature of photographic film (I've been unable to find any indication that photoelectric sensors might exhibit reciprocity failure, and the general consensus seems to be that it doesn't). So if we're just dealing with film, I can see no need to go beyond "optical density" for our definition of photochemical "blackening" (otherwise see actinism). You seem to be overly attached to using H (as opposed to E and t); trying to shoe-horn it in wherever you can, regardless of whether it actually helps clarify or explain the issues at hand. I'm reminded of the saying: "to a man with a new hammer, every problem begins to look like a nail". --Redbobblehat (talk) 14:41, 9 March 2009 (UTC)[reply]
R, it's not as you say. The "eminent physicists and chemists" have got it right, and you've got it wrong. Consider the domain where reciprocity applies (need to understand this before considering how to modify it for reciprocity failure). Would you write D = H = Et? No, of course not; density D is not equal to exposure H, but it's a function of H. Now in Schwarzschild's law, instead of H you have to use the modification of Et, which he proposed as Et^p, in place of H as the argument of that function that maps to D. It is fair (if not conventional) to refer to it as "effective exposure", since it's a quantity that has the same effect on density as exposure has in the reciprocal region. To equate it to D is dimensionally and logically nonsensical, and is not related to the words of Schwarzschild that you quote. Now, when you look at digital, you can't rely on "blackening" or "density" as the result to compare, so you need something functionally equivalent, which is a signal-to-noise ratio based response criterion; as I've said, it's not often looked at that way, and when it is it is sometimes said to not be "reciprocity failure", but that's just a question of what you want to call it. The point is that with digital as well as with film, you can't keep going to half the illuminance and twice the time and still get the same good detailed image; thus reciprocity fails at long enough exposures. There's no question about that, just about what to call it. Dicklyon (talk) 16:48, 12 March 2009 (UTC)[reply]
The sloppy language in the literature doesn't help, unfortunately. For example, this book says "the effect of a photographic exposure E" when it means "the effect E of a photographic exposure", since there's no way the formula for E can be interpreted as an exposure per se; the formula is for the "effect", in terms of "effective exposure", as should be clear from the form of it, and in light of my explanation above. This book is more clear in referring to "the effect". Dicklyon (talk) 22:48, 12 March 2009 (UTC)[reply]
D, I agree with so much of what you are saying - including that we disagree simply on what label to use for "it" ... IMO "it" should not be "exposure" (in any form) because we need to make a clear distinction between stimulus and response; "exposure" is obviously the stimulus, and we need a label for the response ... I agree that optical density is a bit awkward to work around, but it requires much less explanation than "actinic energy" and it is scientifically and historically accurate. In the context of visible light photography, BRL can be written D = log10(Et) and Schwarzschild's law can be written D = log10(Et^p). I keep thinking that maybe opacity would be simpler, but O = Et and O = Et^p is less recognisable ... or maybe we could just say something like R = (a non-specific linear scale of) "photoresponse" or "photoreaction"[10] ? --Redbobblehat (talk) 02:15, 13 March 2009 (UTC)[reply]
No, no, no! The formula D = log10(Et^p) makes no sense. The expression log10(Et^p) is the abscissa, the independent variable, of the HD curve. D is the ordinate, the dependent variable. They're related by the shape of the curve; the same curve, if we can believe Schwarzschild, as the function that relates H to D in the reciprocity region, some function we can call f: D = f(log10(H)). In this relationship, exposure H plays the same role as Et^p; it is in that sense that Et^p can be called an "effective exposure". Understand, and then you can work on what to call it. In this reciprocity failure region, it's not at all clear that it makes any sense at all to refer to the product Et as the stimulus; it's certainly not what's sometimes called the effective stimulus; it is still the exposure, but this kind of long-time exposure at low illuminance is not an effective stimulus. Dicklyon (talk) 03:53, 13 March 2009 (UTC)[reply]
Dick I do understand what you are saying, but I can see no evidence for your interpretation in the relevant sources. The reciprocity law, and Schwarzschild's modification of it, describe the (quantified) "blackening" reaction to the (quantified) "exposure" stimulus. They do not describe "some other type of exposure" as a function of "the accepted definition of photometric exposure". --Redbobblehat (talk) 15:09, 13 March 2009 (UTC)[reply]
R, the only reason you don't see the evidence for my interpretation is that you don't seem able to understand the relationships between sentences and equations. The sources don't say that the formula evaluates to a "blackening" or "density"; rather that equal values of the formula lead to equal amounts of darkening or density. The value (of the independent or abscissa values) is related to the density via a functional relationship, a characteristic curve. All the sources are clear on this, so please study them some more. Dicklyon (talk) 15:44, 13 March 2009 (UTC)[reply]
Ah. I think I see the problem: Schwarzschild makes no mention of the HD curve, but if we use (conflate) "density", etc in our 'modern formulation' we can't ignore the HD curve factor; and that opens up a whole can of worms. Using H' = Et^p is a neat way of getting around this, but I'm uncomfortable about the potential conflict with the definition of photometric exposure which is a measure of total incident energy and therefore independent of the local responsivity characteristics, unlike "total absorbed energy" or "total reactive energy" ... ? --Redbobblehat (talk) 17:59, 14 March 2009 (UTC)[reply]
IMO defining H' as "effective exposure" is not much better than "blackening" or "density"; they all suggest the "total effect", which would depend on the HD curve and spectral sensitivity factors as well as the p coefficient ? The main difference between the Schwarzschild effect and the HD curve, etc is that you can compensate for the Schwarzschild effect by modifying your exposure settings. So for the purposes of calculating exposure settings, we could call H' the "required exposure" or even the "optimal exposure" ... ? --Redbobblehat (talk) 17:59, 14 March 2009 (UTC)[reply]
When Schwarzschild (and others say) "that leads to the same blackening", they mean that the formula value is in the same relationship to blackening as the H is in the HD curve. The formula gives an "effective H" with respect to that curve. Yes, it is a measurement of total effect, or more precisely of the effective stimulus, which in this region is not the same as the total incident, absorbed, or reactive energy, all of which are proportional to just plain H. If you're not comfortable with calling it an "effective exposure" for some reason, find something else to call it, but don't confuse it with blackening, opacity, or density. But thinking of H' as the "required exposure" is exactly backwards. You need to decide what effective exposure you want, in terms of the usual HD curve, interpret that as an H', and work backwards to find an E and t that will give it to you. The resulting H = Et will be greater than the H' you want to end up with. That's how you would use the formula in practice. For example, if you picked an f-number and your meter implies you need a 10-second exposure, you'll have 10 = t^p (some sources call this the effective exposure time); solve for the actual exposure time t, which will be greater than 10, depending on p. Dicklyon (talk) 19:53, 14 March 2009 (UTC)[reply]
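The back-calculation described in the comment above can be sketched numerically. A minimal sketch; the Schwarzschild exponent p = 0.8 is an assumed, merely illustrative value, not one taken from any source cited in this discussion:

```python
# Sketch of the back-calculation described above: the meter implies an
# effective exposure time t_e, and with Schwarzschild exponent p the
# actual shutter time t must satisfy t**p = t_e, i.e. t = t_e**(1/p).
# NOTE: p = 0.8 is an assumed, purely illustrative exponent, not a
# value from any source cited in this discussion.

def actual_time(effective_time, p=0.8):
    """Actual exposure time needed to achieve a given effective time."""
    return effective_time ** (1.0 / p)

print(actual_time(10))  # about 17.8 s for a metered 10 s, when p = 0.8
```

As the comment says, the actual time always comes out greater than the metered time in this region (since p < 1).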
I changed the notation to use the symbols and terminology from a source; no more "effective exposure", but "effect of the exposure" instead. For this to make sense, I had to interpret the source in the way that makes sense, associating "E" in "effect of the exposure E" with "effect of the exposure", not with just "exposure", as the formula makes clear (that is, E is not exposure, but is what I had called "effective exposure"). Better? Dicklyon (talk) 00:54, 15 March 2009 (UTC)[reply]
Yes, definitely better. Certainly most sources seem to use It anyway. Conveniently, I can stand for Illuminance or Irradiance or the traditional (but now frowned upon) Intensity, depending on context - which seems fine to me. E for Effect does make sense, but also evokes Exposure or (famously) Energy ... perhaps we could use R for Reaction to ... or Response to exposure instead ? --Redbobblehat (talk) 15:28, 15 March 2009 (UTC)[reply]
Also, do you think the formula might be more 'easily digestible' if presented in the form : "exposure time required = ..."; eg. t = E / Ip or something like that ? So it's a quick fix for readers who want a practical answer, and the mathematically curious can rearrange it at their leisure ... ? --Redbobblehat (talk) 15:28, 15 March 2009 (UTC)[reply]
I think we should stick close to sources. The Schwarzschild formula is not one that's used anyway, being not at all accurate or principled; something like it was there with the t^2.5 thing, but I took it out as I couldn't find a source. Dicklyon (talk) 15:34, 15 March 2009 (UTC)[reply]
Yeah I know what you mean. Perhaps we should just delete the formula and leave the description (of the Schwarzschild coefficient) as a verbal one ? ... Kron's model looks far more interesting anyway ;-) --Redbobblehat (talk) 18:41, 15 March 2009 (UTC)[reply]
Why would you want to delete it? Just because it's not particularly useful or accurate doesn't mean it's not notable or worth reporting. In limited domains it can be used (any curve fit is only as good as the data in the region it's fitted to). Dicklyon (talk) 19:01, 15 March 2009 (UTC)[reply]
"because it's not particularly useful or accurate" lol :-) ... it distracts from / dilutes the better parts of the article. I do mean just delete the math formula, keep the section mostly as a historic waypoint. IMO it's not relevant enough to warrant a full treatment in this article, and the formula seems to be more trouble than it's worth. The quote gives the essence of it: for comparison we could add the sentence by Schwarzschild describing the "so-called reciprocity law". All he did was add the coefficient, and AFAIK the main implications of that are already stated. I would prefer to put attention into the reciprocity law itself ... and clean up this talk page!! ;-) --Redbobblehat (talk) 01:07, 16 March 2009 (UTC)[reply]

Cleaning up confusion


Red, if you're trying to learn about reciprocity from this article, I can see why you're confused. I've tried to clarify. The reciprocity law should not be confused with the definition of exposure, which it very much was. Reciprocity says same exposure leads to same effect. That's all. Dicklyon (talk) 20:29, 14 March 2009 (UTC)[reply]

Dick, thank you for your sympathy, patience and continued efforts to help clear this up. :-) --Redbobblehat (talk) 21:22, 14 March 2009 (UTC)[reply]

Color film model -- problem with new section


Anon editor has added this content:

Reciprocity failure compensation model for color print films

Published corrections for reciprocity failure are more severe for color print films than for slide film. There are various models for this, and Kodak has published a table of exposure time compensations from a model[1][2] for correct time in seconds, tc, expressible as a power law of metered time, 0.2 sec < tm < 70 sec, i.e., tc = 1.6 * tm^1.3, the right side of which can be directly computed by cutting and pasting into a Google search bar.

  1. ^ Kodak Color Films and Papers for Professionals, Kodak Publication No. E-77, Eastman Kodak Company, Rochester, N.Y., 1986.
  2. ^ Jerry Litynski, "Night Photography and Reciprocity Failure" Dec 30, 2000

I've taken out bits of it a few times, based mostly on WP:RS. There's no need to coach the reader about using Google as a calculator, and it's unclear what is to be verified that way anyway. The Litynski forum reference doesn't suit WP:RS, and doesn't seem to contain anything resembling the stated power-law formula. So I've asked for an explanation here, which should be more efficient than edit warring about it. Dicklyon (talk) 05:22, 13 April 2009 (UTC)[reply]

 tm (s)    tc (s)      factor (tc/tm)
  0.2        0.197      1.0
   (Kodak, 0.2-2 s)     1.4
  2          3.940      2.0(-)
   (Kodak, 2-6 s)       2.0
  6         16.433      2.7
   (Kodak, 6-15 s)      2.8
 15         54.080      3.6
   (Kodak, 15-35 s)     4.0
 35        162.708      4.6
   (Kodak, 35-70 s)     5.6
 70        400.634      5.7

Using the Google link provided, this looks verified to me. -65.246.126.130 (talk) 17:17, 13 April 2009 (UTC)[reply]
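For anyone else wanting to repeat the check without Google, the disputed power law can be evaluated directly against the quoted table; a minimal sketch:

```python
# Direct check of the anon editor's power law against the tc column in
# the table above:  tc = 1.6 * tm ** 1.3

def corrected_time(tm):
    """Corrected exposure time per the disputed power-law model."""
    return 1.6 * tm ** 1.3

for tm in [0.2, 2, 6, 15, 35, 70]:
    tc = corrected_time(tm)
    print(f"tm={tm:>5}  tc={tc:8.3f}  factor={tc / tm:.1f}")
```

The computed tc values match the table's tc column to within rounding.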

What are you trying to tell us? Did you fit the parameters of the power-law model yourself, or is the model given by Kodak? Does the Kodak book even say they used a model? And what exactly is your table, with its odd number of lines, trying to convey? Dicklyon (talk) 18:37, 13 April 2009 (UTC)[reply]

I took the given formula, put it into the Google box with the reported values for tm (0.2,...,70), calculated the corresponding tc's, and BY GOLLY they all bracket the values reported by Kodak as suitable multipliers for those ranges of tm. -65.246.126.130 (talk) 19:06, 13 April 2009 (UTC)[reply]

But where did you get the formula? Dicklyon (talk) 19:46, 13 April 2009 (UTC)[reply]

Here, and if the reliance on the E-77 document above and the online list of part of it doesn't suffice, then other parts of this article, which are backed up with Google book searches too, must go. It suffices for me, given its internal and apparent external consistencies (not to mention the new level of immediate utility added to this article). -65.246.126.130 (talk) 21:10, 13 April 2009 (UTC)[reply]

I asked where did you get it, not where did you put it. Is the power-law formula in Kodak's E-77 book? I'll know soon, as I have a copy on order. Or is it from the forum? I don't see it there. Or did you make it up? Dicklyon (talk) 21:53, 13 April 2009 (UTC)[reply]
Continually putting it back, instead of saying if it comes from a source, isn't going to work, even if you do it from multiple IP addresses. Please stop. Dicklyon (talk) 01:01, 14 April 2009 (UTC)[reply]
Also, the forum you cite is not even clear on what the numbers are for; it doesn't say what film, even whether it's for negative or slide film. It says:
Jerry Litynski [Subscriber] , Dec 30, 2000; 02:25 a.m.
You might consider a notebook to track your exposures.
Kodak Company's recommended exposure table of reciprocity:
If the calculated exposure time is between:    Multiply the exposure by:
1/5 - 2 seconds x 1.4
2 - 6 seconds x 2.0
6 - 15 seconds x 2.8
15 - 35 seconds x 4.0
35 - 70 seconds x 5.6
Example: a 40 second exposure x 5.6 would actually require: 3 minutes 44 seconds
If you use the table above, your meter (in the camera) should provide a ballpark exposure. Then bracket 10 seconds more and less. Once the film is processed, you can check the negatives. (If you are shooting slides -- you do not have the luxury of plus/minus exposures.)
And it doesn't say what publication it refers to. Do you have the Kodak book you cite? Or found out about it another way? Dicklyon (talk) 04:05, 14 April 2009 (UTC)[reply]
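The multiplier table quoted above can be applied mechanically; a minimal sketch reproducing the 40-second example (boundary times such as 2 s fall in two ranges in the quote; this sketch arbitrarily resolves them to the lower range's factor):

```python
# Mechanical application of the Kodak multiplier table quoted above.
# Ranges and factors are exactly those in the forum quote; boundary
# times (2, 6, 15, 35 s) are arbitrarily resolved to the lower range.

KODAK_TABLE = [(0.2, 2, 1.4), (2, 6, 2.0), (6, 15, 2.8),
               (15, 35, 4.0), (35, 70, 5.6)]

def corrected(t):
    """Corrected exposure time for a metered time t in seconds."""
    for low, high, factor in KODAK_TABLE:
        if low <= t <= high:
            return t * factor
    raise ValueError("metered time outside the table's 0.2-70 s range")

print(corrected(40))  # about 224 s, i.e. 3 min 44 s, matching the example
```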

The one I found was in a forum that you dismissed as "some guy's formula". Still, it seems derived from careful experimentation, the entire procedure for which was laid out in the forum. I reported the general form of the correction, as several versions of it were on the forum. The other was from someone else who appears to back it up at least as well as in the following "quantum" section. Assuming it's valid (it's consistent with everything else I've seen) I'd say use it, unless disproven. From the lag, it's got to be color print film. I myself don't have E-77 (looking around though). -65.246.126.130 (talk) 19:39, 14 April 2009 (UTC)[reply]

The forum page you linked led you to conclude that "In practice, a correction to add to metered time, tm, is found to be of the form a(tm)^b, where 0<a<1 and b>1," yet all that's in the source is a couple of particular pairs of numbers that fit a particular set of three points. That's hardly enough to draw a conclusion about a general form, and there's no evidence that "in practice" something was found. And it's a forum, not WP:RS. The formula is clearly your original research, based on inspiration from a forum page. It may be a great formula, but we can hardly conclude anything from the few data points available, and even if we could, there's nothing verifiable to say (per the definition in WP:V). So please stop adding it, and get to work on finding a good source instead. And if you have a problem with other parts of the article, by all means do articulate those. Dicklyon (talk) 23:10, 14 April 2009 (UTC)[reply]
Also, that power-law model of extra time to add made at least some plausible sense; your latest model, correct time as a power-law function of metered time, is nonsense, as it implies correct time less than metered time for exposures shorter than about 0.1 second. A plot of your model against the Kodak data makes it obvious that it's a very poor fit. Dicklyon (talk) 23:23, 14 April 2009 (UTC)[reply]
I was thinking that 74. and 65. were the same anon IP editor, but looks like not. 74 says in latest edit summary "You two are wasting time. I have E-77 and the 2000 link is accurate. The power law accurately represents it over its domain of definition." It would be nice if he would come here and tell us what's in the E-77 and what he means by the rest. Sounds like defense of original research, which is not appropriate in wikipedia. I'm going to take it out again... Dicklyon (talk) 02:16, 15 April 2009 (UTC)[reply]

It's not research if it's a simple, accurate summary, which I verified in the table above. Removing it is like removing a foreign language summary because you're not sure of it. The only issue, as I see it, is whether "1/5 - 2 seconds x 1.4 - - - 35 - 70 seconds x 5.6" is reflected in E-77. I'm willing to trust that, for now. -65.246.126.130 (talk) 17:48, 15 April 2009 (UTC)[reply]

The issue is whether there's any mention of a model in E-77; it looks like the model was something that the guy on the forum made up, and that's not a reliable source; lacking any source for a model, we have no reason to introduce one here. You could list the table of data if you like (I'll verify in E-77 when it arrives). Dicklyon (talk) 18:41, 15 April 2009 (UTC)[reply]

If E-77 contains (even if only in effect) the table on the forum, the model is implicit. The source for the model is "Kodak" mentioned on the forum and stated by the guy putting both links in the section. E-77 doesn't have to mention a "model", if it models the relationship between tc and tm in the manner shown on the forum. In that case, the equation is a summary. -65.246.126.130 (talk) 21:58, 15 April 2009 (UTC)[reply]

If the model is "implicit" in the sense you seem to mean, it should stay that way. It is not up to us editors to make models for sourced data, nor to quote the models of random guys on forums. It would be much better to show the data than to put an unsourced model of the data. Please review WP:V and WP:RS. Dicklyon (talk) 23:44, 15 April 2009 (UTC)[reply]

Third opinion

Kodak E-77 data and some power-law models (straight lines) and a better model (curve). A correct model should remain non-negative (in stops of extra exposure) at shorter metered times.

A WP:3O has been requested here: Wikipedia:Third_opinion#Active_disagreements. 23:49, 15 April 2009 (UTC)

"Can an inferred model that fits published data be considered verifiable?" It depends on whether (1) the form of the model can be made to fit the data to an acceptable degree (a valid model) and (2) the data is sourced. If it fits (allowing for built-in inaccuracies or "noise" in the data) the inference is valid, but only over the range of the data. It is like a restatement or summary of an event or position that "fits" the facts. It is an inference that is valid to the extent it does, but only over the scope of the facts, which if sourced, makes the inference verified. -Rotwechsel (talk) 04:03, 16 April 2009 (UTC)[reply]

Is there any policy backing up that strange opinion? What if it can be shown that a different model correctly extrapolates outside the range of the data, and fits better? Are we to then have fights about whose model to include, based on how well they fit? See figure. And why is your very first wikipedia edit a third-opinion response? Can I assume that you are in fact not a different editor, but rather one of the anon IP editors involved? Dicklyon (talk) 04:48, 16 April 2009 (UTC)[reply]
Hello, I am responding to the request for a third opinion. Upfront, I am not versed in photography, but my understanding is that there is a print book from Kodak that is supposed to list a table of factors for correct exposure times for a set of metered times. Assuming that this table is in the text, but that the table is all the data that Kodak provides, the question is whether the article can include a power-law fit for the data as a model relating correct and metered times, while referencing only the book. (Please correct me if that isn't right.)
From that, I think that Dicklyon is correct that the only way to include this information would be to reproduce the table of data here (if licensing permits), or to simply reference where to find the table as is done in the current revision; we could not provide our own fit to the data, regardless of how good or sensible the fit seems to be. There are any number of possible "good" fits to a set of empirical data points and we have no way of choosing which to provide here. If a reader wants to make that conclusion for their own uses, then we have supplied that data table (or a reference to it) for them to do so, but we can't do it for them unless we have a reliable source that explicitly gives that model, and it doesn't sound like the Kodak book does that. AlekseyFy (talk) 06:08, 16 April 2009 (UTC)[reply]
Encyclopedias produce clear, useful summaries or explanations of often complex topics. Editors choose between them or adjust them to fit the facts better, or also, to make them more readable. Substitute "data" for "facts" and "easily used" for "readable" to get a consistent policy on models. It is well within the duties of editors to produce models for those purposes. It's no longer an esoteric enterprise that only PhD scientists should do. If Dick has an easy-to-use equation for the solid red line, he should put it in the article, as the useful summary it is. Otherwise, I'd say find the equation for the one splitting the difference between the green and dashed lines and put that in the article as a reliable and useful summary of the table. That only some editors have the skills to do that should be no impediment. Not all can read, write, and translate French either. (Add a graph like the one above, too.) -65.246.126.130 (talk) 18:39, 16 April 2009 (UTC)[reply]
65, my model has an added offset, much like the sourced one that I added to the Schwarzschild law section, such that short times are unaffected; but in this form, for converting metered time to time needed, I know of no source for such a model, so it's not appropriate to include (unless we find one). Dicklyon (talk) 03:51, 17 April 2009 (UTC)[reply]
First, please don't do reverts on material that is being discussed in talk; this page is hovering very close to WP:3RR. Second, we already have consistent policies that cover situations like this. Reading from WP:OR:
Wikipedia does not publish original research or original thought. This includes unpublished facts, arguments, speculation, and ideas; and any unpublished analysis or synthesis of published material that serves to advance a position.
Take care, however, not to go beyond what is expressed in the sources or to use them in ways inconsistent with the intent of the source, such as using material out of context. In short, stick to the sources.
The emphasis is mine. I'm not saying that a proper model would not be a clear or useful addition to the article. All I'm saying is that until we have one that is correctly sourced instead of just something we are making up here, we can't include it. We have to stick to the sources. AlekseyFy (talk) 19:09, 16 April 2009 (UTC)[reply]
The plus signs (+) on the graph IMPLY (and I'm not just inferring it) a model of the sort represented by the lines on the graph; which one is best is for the editors to decide. -65.246.126.130 (talk) 22:17, 16 April 2009 (UTC)[reply]
The plus signs imply, or represent, Kodak's published piecewise-constant model. No other model is published for this data, except on user forums, and the one you chose from there is not a good one. But let's don't choose one; stick to reliable sources. The straight-line models shown are implied only if you're looking for a straight-line model in this particular nonlinear plot; it makes much more sense to do what the previously cited other forum page said, which is to make a log-log plot of the additional time required, rather than of the time correction factor; in that domain, a straight line will behave sensibly. But these models have to be invented; they're not implied by the data. Dicklyon (talk) 03:30, 17 April 2009 (UTC)[reply]
You know, of course, that can't be the true model, and how you persist on this strange path is beyond belief. The true phenomena are not piecewise. Kodak gave this only for convenience, and any of the continuous models above, by their very form, must be more accurate and extrapolatable into the exterior region that matters >70 sec. -Rotwechsel (talk) 04:28, 17 April 2009 (UTC)[reply]
Of course I agree that it could be useful to have a continuous model that sensibly extrapolates. The one you provided doesn't sensibly extrapolate to shorter times than 0.1 second, so it's not so great. But that's not why I object to it; Schwarzschild's law has the same problem, and I don't object to that, because it's sourced. Dicklyon (talk) 04:32, 17 April 2009 (UTC)[reply]
Also, could you do us the courtesy of letting us know whether you are the 65 or the 74 IP editor? Dicklyon (talk) 04:34, 17 April 2009 (UTC)[reply]
You know, of course, the exterior region < 0.2 second doesn't matter as the phenomenon doesn't exist there; the sources are enough for this article. -Rotwechsel (talk) 04:51, 17 April 2009 (UTC)[reply]
As a long time modeler of all kinds of phenomena, I find it important that models behave sensibly when extrapolated a bit. Having a reciprocity-failure model that continuously becomes reciprocity success at short times is more sensible than having one that needs a cutoff and leaves a "corner" in the overall model. But that's just me; what model I prefer is not relevant to the article, which is what we're supposed to be talking about. Dicklyon (talk) 05:04, 17 April 2009 (UTC)[reply]
If the model accurately summarizes some part of the article (which it does) and is very usable (which it is), and, you have ideas as to how to improve it, then what model you prefer, and why, is highly relevant to the article. Even newspaper editors now generate models to summarize data. We, too, inhabit this new IT-driven, 21st Century. -Rotwechsel (talk) 15:18, 17 April 2009 (UTC)[reply]
I don't think we inhabit that same world. Wikipedia policies like WP:V are very explicit in prohibiting such behavior. Dicklyon (talk) 17:06, 17 April 2009 (UTC)[reply]
...same world as you. Nowhere does it say anything about model*, equat*, math*, arithm*, or law*. A summary is not research. Don't be so timid. I'd back you (and others, too) for putting a useful little model in here. -Rotwechsel (talk) 19:59, 17 April 2009 (UTC)[reply]

Semi-protected


I've had the article semi-protected, meaning that anonymous IP editors can't edit it for a while; this should give time for the E-77 to arrive, and then I can verify what's in it and fix the section appropriately. If you guys want to participate, I recommend making accounts. Dicklyon (talk) 03:41, 17 April 2009 (UTC)[reply]

Solved


User:Mbhiii has kindly found and added a source for a power-law compensation model; so we can include it and no longer have to cite a forum for it. Dicklyon (talk) 20:14, 17 April 2009 (UTC)[reply]

"Just as those who practice the same profession recognize each other instinctively, so do those who practice the same vice." - Marcel Proust —Preceding unsigned comment added by 65.246.126.130 (talk) 21:05, 17 April 2009 (UTC)[reply]
Not sure what your point is, but nice quote. Now Mbhiii is still adding some additional unsourced interpretation, trying to patch up an obvious deficiency in the model, which is one parameter short. However, his fix pins the breakpoint at 1 second. A reasonable alternative is to let the breakpoint float and measure time in units of it. Then no explicit correction factor is needed. If nobody has stated such a correction one way or the other, why should we make one up and put it into the article? Dicklyon (talk) 23:53, 17 April 2009 (UTC)[reply]
It's too easy to get confused about how to use an equation like this (just look at the last entry by Ryan Buckley in "Reciprocity tests of Kodak UC100 and Fuji Reala"). Better recheck the fix. There's no breakpoint. F^(p-1) is constant in time. -74.162.130.58 (talk) 04:23, 18 April 2009 (UTC)[reply]
The breakpoint is what you need to add to make this power law model make sense, and it's at t = 1, because for smaller t than 1 it predicts a correct time less than the metered time. This can be 1 second, but could just as well be 1 minute or any other time unit, for all we know, since the model is not of a form to control this independently of the chosen time units. Dicklyon (talk) 06:32, 18 April 2009 (UTC)[reply]
OK, now I see what you mean by breakpoint; F^(p-1) just keeps it, in effect, in the right place. If p is derived for Tc and Tm in seconds, multiplying by 60^(p-1) guarantees Tc<Tm still occurs only for t<1 second, when using Tc and Tm in minutes. -74.162.151.39 (talk) 15:03, 18 April 2009 (UTC)[reply]
"just keeps it in the right place" would only be true if the right place was always 1 second; on the page you cite, the correct place is more like 50 seconds; you'd be better off changing the minutes. Your correction just makes the model less accurate for the films described at that forum. Dicklyon (talk) 22:47, 24 April 2009 (UTC)[reply]
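The unit-scaling point argued in this exchange can be checked numerically: if tc = tm^p is calibrated with times in seconds, the same predictions with both times in minutes require the factor 60^(p − 1). A short Python sketch, where p = 1.3 is a made-up illustrative exponent, not a value from any source:

```python
# If tc = tm**p is fit with times in seconds, expressing both times in
# minutes introduces the factor 60**(p - 1):
#   tc_sec = tm_sec**p   implies   tc_min = 60**(p - 1) * tm_min**p
p = 1.3  # hypothetical exponent, for illustration only

def corrected_seconds(tm_sec):
    return tm_sec ** p

def corrected_minutes(tm_min):
    return 60 ** (p - 1) * tm_min ** p

tm_sec = 120.0  # a metered 2-minute exposure
via_seconds = corrected_seconds(tm_sec)
via_minutes = corrected_minutes(tm_sec / 60) * 60  # convert back to seconds
print(abs(via_seconds - via_minutes) < 1e-6)  # the two forms agree: True
```

With the factor in place, the corrected time falls below the metered time only for metered times under 1 second regardless of the units used, which is the "breakpoint pinned at 1 second" under dispute above.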

Sockpuppetry


It turns out that User:Mbhiii and User:65.246.126.130 are the same editor. If this were to be admitted and dealt with, it would make discussions here easy. If it's not, then we might suspect a violation of WP:SOCK. Dicklyon (talk) 17:24, 18 April 2009 (UTC)[reply]

Wrong, there are at least a few different editors from this IP, but not on this article, as far as I can tell. -65.246.126.130 (talk) 22:27, 21 April 2009 (UTC)[reply]
Are you saying that you are not the same as Mbhiii? Do we need to do a checkuser on you? Dicklyon (talk) 00:47, 22 April 2009 (UTC)[reply]

It's also very likely that User:Rotwechsel is the same of the many 74... IP addresses. I've asked above for clarification, but none has been forthcoming. Dicklyon (talk) 17:26, 18 April 2009 (UTC)[reply]

I've started an investigation on the lot of you at Wikipedia:Sockpuppet_investigations/Mbhiii. Dicklyon (talk) 01:22, 22 April 2009 (UTC)[reply]

"The substantive issue"


One of Mbhiii's many socks asks that I "address the substantive issues". OK, here it is: besides the fact that basing edits on random comments in forums is against policy, the particular point that Mbhiii is most recently trying to make is that the formula for corrected time as a power of metered time can be made to be more generally correct if an extra factor is included to compensate for using units different from seconds. This is one possibility, but stating it simply gives more credence to the erroneous idea that the formula is somehow sensible in the first place. In particular, it pins the "break point" between the reciprocity region and the reciprocity failure region at 1 second.

A perfectly reasonable alternative (also unsourced) would be to allow different time units, without a factor to correct back to 1 second, such that the break point could be moved to 1 arbitrary time unit. Or, as he tried to put into the article a few weeks ago, use a formula with an explicit second parameter in the form of a multiplier, so that the units and the breakpoint can be made independent.

Even better, make the break point explicit and use the formula tc = tb·(tm/tb)^p. More sensible, but still unsourced, and still lame, so let's don't.

An even more sensible formula would be this fix applied to the one in the forum that he links (here), which is not the same as the formula that's sourced; that is, the power law formula should be the extra time, not the corrected time; then no explicit breakpoint is needed and you get a continuous sensible model. For the films studied, the "break point" times for both films are much longer than 1 second, and two different factors are needed. This page does NOT provide an example of the sourced formula being wrongly used by using minutes instead of seconds – the sourced formula would be very wrong either way; actually closer in minutes, though.

The formula t' = tm + 0.055*(tm^1.73) can be rewritten as t' = tm * (1 + (tm/tb)^0.73), for tb = 53.2 seconds; and if the equation is written that way, then the times can all be measured in whatever units you want (all the same, of course). The breakpoint represents the point where the added time equals the metered time (1 stop of correction if looked at on the time axis); since it's nearly a minute, the one-parameter formula that's in the reliable source would actually work much better in minutes than in seconds; it would say no correction at 1 minute, the corner, which would be an error of about 1 stop, but then would be relatively accurate after that, instead of being way wrong for all times between 1 second and 1 minute. So Mbhiii's attempt to "correct" the gross error only makes the formula work worse, by pretending it is sensible.

Lacking any sources, adding material that shores up the worst of the models is just lame, and stupid, not just against policy. Adding material to demonstrate any of these better models is not stupid, but still against policy, so let's just leave well enough alone unless we find more sources of stuff to report. Dicklyon (talk) 04:31, 24 April 2009 (UTC)[reply]
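The algebra behind the tb = 53.2 s rewrite two paragraphs up is easy to verify. A short Python sketch, using the constants 0.055 and 1.73 quoted from the forum fit purely as a check:

```python
# Verify the rewrite t' = tm + a*tm**b  ==  tm*(1 + (tm/tb)**(b-1))
# for tb = (1/a)**(1/(b-1)), with the forum's fitted constants.
a, b = 0.055, 1.73

tb = (1 / a) ** (1 / (b - 1))  # breakpoint where added time equals metered time

def additive_form(tm):
    return tm + a * tm ** b

def breakpoint_form(tm):
    return tm * (1 + (tm / tb) ** (b - 1))

print(tb)  # roughly 53.2 (seconds), as stated in the comment above

# The two forms agree across the range of interest (to float precision).
for tm in (1.0, 10.0, tb, 600.0):
    assert abs(additive_form(tm) - breakpoint_form(tm)) < 1e-9 * additive_form(tm)
```

The breakpoint form makes explicit that the time unit is absorbed into tb, which is the portability argument being made in the paragraph above.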

Third opinion, second request


Mbhiii asked for another third opinion, but didn't make a section for it. Here's one. The question is as discussed above: should we add an original "simple algebraic correction" to a model, based on a question about a different but related model, in a forum, even though the resulting corrected model is still not compatible with the data and model in that forum? 04:58, 24 April 2009 (UTC)

What Dick thinks is sensible is some model other than the one sourced (as he details above). Any notes to make the current model more useful by avoiding a common error are something he opposes (ibid). Please review the following and render an opinion or find someone who will. Thanks. (Note, this is not affected by the 3O given earlier by User:AlekseyFy. That model has since been sourced to a book, ending that controversy, as discussed at "Solved". Rather, it has to do with a common misuse of that type of model, for which the only source, so far, is an error on a forum, and its simple algebraic correction. See also the history, and recall that the rule against citing a forum is not absolute; in this case, it's for someone's calculation error.) 21:04, 24 April 2009 (UTC)
Mbhiii, what I oppose is not "Any notes to make the current model more useful by avoiding a common error", but rather any unsourced expansion on this very poor model, and particularly a change that as I argue above can be seen as making the model less useful by tying it explicitly to the arbitrary and usually wrong break-point time of 1 second. Also, be aware that the "error" in the forum is about a completely different and more useful model that would not have this problem, where the correction would indeed make some sense; it is much less applicable to the sourced model that we have now. As I've said before, my own preferences here are not an issue if we stick to the policy of verifiability; what I really would prefer is to find more good sourced information that we can add, especially on better models; a lame comment in a forum, misapplied to a different model where it does more harm than good, doesn't qualify. Dicklyon (talk) 22:32, 24 April 2009 (UTC)[reply]
Dithering about the break-point at 1 sec. is useless, because, as you know, already, reciprocity compensation is really only needed for times > 1 sec., with film latitude making up any differences. (...what microscope manufacturers seem to assume, too.) Also, as you know, as well, the exact same correction applies to the power law component in the forum model. -MBHiii (talk) 20:54, 25 April 2009 (UTC)[reply]
I'm not dithering; I'm saying consistently that we must not add unsourced extensions, interpretations, and so-called "corrections" to the article. The rest was just to help you understand why if we did it would not actually be very helpful, and might even be very harmful in more firmly supporting the worst aspect of the sourced model, which is its peculiar tie to the 1-second time unit used. Dicklyon (talk) 22:03, 25 April 2009 (UTC)[reply]
Repeating this, over and over again, shouldn't make it any more believable. -MBHiii (talk) 17:30, 27 April 2009 (UTC)[reply]
What is it that you find hard to believe? That we shouldn't add unsourced interpretive stuff to the article? Or that the reasons for the details I presented here are to help you understand the issues? Or that the model you are tying to shore up ties the break-point for the reciprocity failure region to 1 second, or 1 of whatever time unit is used? Dicklyon (talk) 18:05, 27 April 2009 (UTC)[reply]
This is like dealing with "When did you stop beating your wife?" One has to waste too much time on your unacceptable assumptions. -MBHiii (talk) 18:44, 27 April 2009 (UTC)[reply]

Third opinion. Unsourced corrections can't be applied to sourced formulas. A reliable, verifiable source is required. If a reader won't benefit from a formula as it appears in a source, the formula should not be used here. Binksternet (talk) 21:28, 28 April 2009 (UTC)[reply]

Modified formula


I am concerned with the following paragraph

Since the Schwarzschild law formula gives unreasonable values for times in the region where reciprocity holds, a modified formula has been found that fits better across a wider range of exposure times. The modification is in terms of a factor that multiplies the ISO film speed:[1]

Relative film speed

where the t + 1 term implies a breakpoint near 1 second separating the region where reciprocity holds from the region where it fails.

The statement here is affected, in my opinion, by the following problems:
1 - The formula is not correctly described, since clearly the constant 1 in (t+1) is measured in seconds, but it is nowhere stated that t has to be in seconds.
1.1 - Magic numbers are always suspicious: why one second and not a constant depending on the type of film? I guess that this formula won't work, for example, for printing paper.
2 - The value of p here is the reciprocal of the value of p in the next paragraph (well, this may actually be the next paragraph's fault, anyhow it is confusing)
3 - The source cited is an amateur book which, moreover, has its math wrong (in fact, the author states that two formulae are the same while they are just approximately the same).
3.1 - The author of that book is not an expert (at least in the academic sense), and, as far as I know, he may have invented these formulae himself.
4 - The modified model is just an obvious correction to the Schwarzschild power-law.
—Preceding unsigned comment added by 87.2.56.67 (talk) 14:05, 30 July 2009 (UTC)[reply]

References

  1. ^ Michael A. Covington (1999). Astrophotography for the amateur. Cambridge University Press. p. 181. ISBN 9780521627405.

Sensitizing dye


There seems to be little on the physics of sensitizing dyes in photography. Remember that silver bromide is naturally sensitive to blue and shorter wavelengths. In semiconductor physics, that means photons with energy greater than the band gap. Sensitizing dyes allow for other wavelengths. Now, in addition to the question of how many photons the silver bromide requires, one must ask how many photons are needed for the dye. I suspect, but don't have a reference for, that much of reciprocity failure is related to sensitizing dyes. In the case of long exposures, the photon energy can be lost before the next one comes along. I also suspect, but again have no reference for, that the short-exposure reciprocity failure is related to the dyes. The dye has to absorb one photon, adjust appropriately, then absorb a second one. If the two come too close together, the dye can't absorb them. That would explain the color shift in color film, too. It would be nice to have some references, though. Gah4 (talk) 08:20, 16 July 2013 (UTC)[reply]