Complementary event

In probability theory, the complement of any event A is the event [not A], i.e. the event that A does not occur.[1] The event A and its complement [not A] are mutually exclusive and exhaustive. Generally, there is only one event B such that A and B are both mutually exclusive and exhaustive; that event is the complement of A. The complement of an event A is usually denoted as A′, Aᶜ, ¬A, or Ā. Given an event, the event and its complementary event define a Bernoulli trial: did the event occur or not?

For example, if a typical coin is tossed and one assumes that it cannot land on its edge, then it can either land showing "heads" or "tails". Because these two outcomes are mutually exclusive (i.e. the coin cannot simultaneously show both heads and tails) and collectively exhaustive (i.e. no other outcome is possible), they are each other's complements. This means that [heads] is logically equivalent to [not tails], and [tails] is equivalent to [not heads].
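For a finite experiment, this definition can be made concrete with ordinary set operations. The following is a minimal Python sketch (the set representation and the variable names are chosen here purely for illustration) that models the coin toss above and checks that [heads] and [not heads] are mutually exclusive, collectively exhaustive, and that [not heads] is exactly [tails]:

# The coin toss above, with events modeled as Python sets of outcomes.
sample_space = {"heads", "tails"}
heads = {"heads"}

not_heads = sample_space - heads          # the complement of [heads]

assert heads & not_heads == set()         # mutually exclusive: no shared outcome
assert heads | not_heads == sample_space  # collectively exhaustive: covers everything
assert not_heads == {"tails"}             # [not heads] is exactly [tails]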

Complement rule

In a random experiment, the probabilities of all the possible outcomes (the sample space) must sum to 1: some outcome must occur on every trial. For two events to be complements, they must be collectively exhaustive, together filling the entire sample space. Therefore, the probability of an event's complement must be unity minus the probability of the event.[2] That is, for an event A,

Pr(A′) = 1 − Pr(A).

Equivalently, the probabilities of an event and its complement must always sum to 1. This does not, however, mean that any two events whose probabilities sum to 1 are each other's complements; complementary events must also satisfy the condition of mutual exclusivity.
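As a quick numerical illustration of the rule (a sketch only; the fair six-sided die and the event "the throw is even" are assumptions made for this example), one can check in Python that the probability of the complement equals 1 minus the probability of the event:

from fractions import Fraction

# A fair six-sided die: each outcome has probability 1/6.
sample_space = {1, 2, 3, 4, 5, 6}
p = {outcome: Fraction(1, 6) for outcome in sample_space}

A = {2, 4, 6}                                   # event "the throw is even"
A_complement = sample_space - A                 # event "the throw is odd"

prob_A = sum(p[x] for x in A)                   # 1/2
prob_A_complement = sum(p[x] for x in A_complement)

assert prob_A_complement == 1 - prob_A          # complement rule: Pr(A') = 1 - Pr(A)
assert prob_A + prob_A_complement == 1          # equivalently, the two probabilities sum to 1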

Figure: The complement of an event A. Event A and its complement fill the entire sample space.

Example of the utility of this concept

Suppose one throws an ordinary six-sided die eight times. What is the probability that one sees a "1" at least once?

It may be tempting to say that

Pr(["1" on 1st trial] or ["1" on 2nd trial] or ... or ["1" on 8th trial])
= Pr("1" on 1st trial) + Pr("1" on 2nd trial) + ... + Pr("1" on 8th trial)
= 1/6 + 1/6 + ... + 1/6
= 8/6
= 1.3333...

This result cannot be right because a probability cannot exceed 1. The calculation is wrong because the eight events whose probabilities were added are not mutually exclusive: adding their probabilities counts more than once every outcome in which a "1" appears on two or more trials.

One may resolve this overlap by the principle of inclusion-exclusion, or, in this case, by simply finding the probability of the complementary event and subtracting it from 1. Since the eight throws are independent, the probability of seeing no "1" factors into a product over the trials:

Pr(at least one "1") = 1 − Pr(no "1"s)
= 1 − Pr([no "1" on 1st trial] and [no "1" on 2nd trial] and ... and [no "1" on 8th trial])
= 1 − Pr(no "1" on 1st trial) × Pr(no "1" on 2nd trial) × ... × Pr(no "1" on 8th trial)
= 1 − (5/6) × (5/6) × ... × (5/6)
= 1 − (5/6)⁸
= 0.7674...
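The same figure can be reproduced exactly, and cross-checked against the inclusion-exclusion sum mentioned above, with a short Python sketch (the use of exact fractions here is a choice made for illustration):

from fractions import Fraction
from math import comb

# Complement approach: Pr(at least one "1") = 1 - Pr(no "1" in 8 throws).
p_at_least_one = 1 - Fraction(5, 6) ** 8
print(float(p_at_least_one))                    # 0.7674...

# Cross-check via inclusion-exclusion over the eight events ["1" on trial i]:
# any k specified trials all show "1" with probability (1/6)^k, and there are
# C(8, k) ways to choose those k trials.
p_inclusion_exclusion = sum(
    (-1) ** (k + 1) * comb(8, k) * Fraction(1, 6) ** k
    for k in range(1, 9)
)
assert p_inclusion_exclusion == p_at_least_one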

References

  1. Johnson, Robert R.; Kuby, Patricia J. (2007). Elementary Statistics. Cengage Learning. p. 229. ISBN 978-0-495-38386-4.
  2. Yates, Daniel S.; Moore, David S.; Starnes, Daren S. (2003). The Practice of Statistics (2nd ed.). New York: Freeman. ISBN 978-0-7167-4773-4.