Subindependence

From Wikipedia, the free encyclopedia

In probability theory and statistics, subindependence is a weak form of independence.

Two random variables X and Y are said to be subindependent if the characteristic function of their sum is equal to the product of their marginal characteristic functions. Symbolically:

    φ_{X+Y}(t) = φ_X(t) · φ_Y(t)   for all t.
This is a weakening of the concept of independence of random variables, i.e. if two random variables are independent then they are subindependent, but not conversely. If two random variables are subindependent, and if their covariance exists, then they are uncorrelated.[1]
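A brief sketch of the argument, under the simplifying assumption that X and Y have finite second moments: differentiating the defining identity φ_{X+Y}(t) = φ_X(t)·φ_Y(t) twice at t = 0 yields

    E[(X + Y)²] = E[X²] + 2·E[X]·E[Y] + E[Y²],

while expanding the left-hand side directly gives E[X²] + 2·E[XY] + E[Y²]. Comparing the two expressions shows E[XY] = E[X]·E[Y], that is, Cov(X, Y) = 0.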

Subindependence has some peculiar properties: for example, there exist random variables X and Y that are subindependent, but such that X and αY are not subindependent when α ≠ 1.[1] Since independence of X and Y would imply independence (and hence subindependence) of X and αY for every α, such X and Y cannot be independent.

One instance of subindependence arises when X is a Cauchy random variable with location 0 and scale s, and Y = X, the antithesis of independence. Then X + Y = 2X is also Cauchy, with location 0 and scale 2s. The characteristic function of X (and of Y) is φ(t) = exp(-s·|t|), and the characteristic function of X + Y is exp(-2s·|t|) = (exp(-s·|t|))², which is exactly the product of the marginal characteristic functions. Hence X and Y are subindependent even though they are completely dependent.
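The identity above can also be checked numerically. The following is a minimal sketch (not part of the referenced literature) using NumPy; the scale s = 1.5 and the sample size are arbitrary illustrative choices. It estimates the characteristic function of X + Y from simulated draws and compares it with the product of the marginal characteristic functions:

    import numpy as np

    rng = np.random.default_rng(0)
    s = 1.5                           # scale of the Cauchy distribution (illustrative choice)
    n = 200_000                       # Monte Carlo sample size (illustrative choice)
    x = s * rng.standard_cauchy(n)    # X ~ Cauchy(location 0, scale s)
    y = x                             # Y = X, so X and Y are completely dependent

    t = np.linspace(-3.0, 3.0, 13)

    # Empirical characteristic function of X + Y, estimated from the samples
    phi_sum = np.mean(np.exp(1j * np.outer(t, x + y)), axis=1)

    # Product of the marginal characteristic functions: exp(-s|t|) * exp(-s|t|)
    phi_prod = np.exp(-s * np.abs(t)) ** 2

    # The difference should be small (Monte Carlo error only), illustrating
    # subindependence of X and Y even though Y = X.
    print(np.max(np.abs(phi_sum - phi_prod)))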

Notes

  1. ^ a b Hamedani & Volkmer (2009)

References

  • G.G. Hamedani; Hans Volkmer (2009). "Letter". The American Statistician. 63 (3): 295. doi:10.1198/tast.2009.09051.

Further reading

  • Hamedani, G.G.; Walter, G.G. (1984). "A fixed point theorem and its application to the central limit theorem". Archiv der Mathematik. 43 (3): 258–264. doi:10.1007/BF01247572.
  • Hamedani, G.G. (2003). "Why independence when all you need is sub-independence". Journal of Statistical Theory and Applications. 1 (4): 280–283.
  • Hamedani, G. G.; Volkmer, Hans; Behboodian, J. (2012-03-01). "A note on sub-independent random variables and a class of bivariate mixtures". Studia Scientiarum Mathematicarum Hungarica. 49 (1): 19–25. doi:10.1556/SScMath.2011.1183.