### [A, SfS] Chapter 2: Probability: 2.1: Probability Principles


In this lesson, you will learn the basic probability principles:

- Definitions
- Axioms
- Rules

#\text{}#

Experiments

In the mathematical field of probability, we define an **experiment** as any process whose outcome cannot be predicted with certainty.

In probability, examples of experiments include:

- The flipping of a coin
- Counting the number of errors in a manuscript
- Choosing a number in an interval
- Weighing a newborn calf
- Detecting the presence of disease
- Waiting for a train

#\text{}#

For any experiment, there are at least two possible outcomes, and possibly infinitely many possible outcomes.

Sample Space

The set of all possible outcomes is the **sample space**, denoted #S#.

A coin flip has the following sample space:

\[S = \{\text{Heads, Tails}\} = \{H, T\}\]

If you are counting the number of errors in a manuscript, then

\[S = \{0, 1, 2, \ldots\}\]

which indicates an infinite sample space. Of course, there must be some limit to the number of possible errors, but since we do not know what that limit is, we do not specify one.

If you are waiting for a train, the time (in seconds) until it arrives is uncertain. For this experiment:

\[S = (0, \infty)\]

This is an interval of real numbers, so the sample space includes numbers like #100\pi# and #\sqrt{786}#. If, however, you are rounding the time up to the nearest second, then:

\[S = \{1,2,3,\ldots\}\]

Also, if you are certain the train must arrive within #10# minutes (#600# seconds), then:

\[S = (0, 600]\]

Or, if you are rounding:

\[S = \{1,2, \ldots, 600\}\]

#\text{}#

Event

Any subset of the sample space is called an **event**.

An event can also be the entire sample space, or even the empty set (i.e., a set containing no outcomes).

Notation

The notation for an event is usually a capital letter from the first half of the Latin alphabet, such as #A#, #B#, or #C#.

Alternatively, if the number of events is large, events can be listed with subscripts such as #A_1, A_2, A_3,\ldots#.

For example, if the experiment is “get graded for a calculus test” then the sample space is:

\[S = \{0,1,2,\ldots,100\}\]

and the event “pass the calculus test” is the event:

\[A = \{55,56,\ldots,100\}\]

#\text{}#

Complement

The **complement** of an event #A# is the event consisting of every outcome in #S# which is **not** in #A#.

Notation

The symbol for the complement of #A# is #A^c#.

For the above experiment of “get graded for a calculus test”, the complement of #A# is:

\[A^c = \{0,1,2,\ldots,54\}\]

i.e., the event: “fail to pass the calculus test”.

#\text{}#

Union

The **union** of two events #A# and #B# is the set which contains every outcome which occurs in event #A# **or** in event #B# (or in both).

You can have the union of any finite number of events, or even of an infinite list of events.

Notation

The mathematical symbol for the union of #A# and #B# is #A\cup B#.

#\text{}#

Intersection

The **intersection** of two events #A# and #B# is the set which contains every outcome which occurs both in event #A# **and** in event #B#.

You can have the intersection of any finite number of events, or even of an infinite list of events.

Notation

The mathematical symbol for the intersection of #A# and #B# is #A\cap B#.

For example, if #A = \{\text{green},\text{blue},\text{red},\text{orange}\}# and #B = \{\text{yellow},\text{black},\text{red}\}#, then:

- #A\cup B = \{\text{green}, \text{blue}, \text{red}, \text{orange}, \text{yellow}, \text{black}\}#
- #A\cap B = \{\text{red}\}#
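As a quick check, these set operations are available directly in base R; a minimal sketch, with the two events stored as character vectors:

```r
# The two events as character vectors
A <- c("green", "blue", "red", "orange")
B <- c("yellow", "black", "red")

union(A, B)      # every color in A or B (or both)
intersect(A, B)  # every color in both A and B: "red"
```

Note that base R's `union()` and `intersect()` treat the vectors as sets, so duplicates are ignored and only membership matters.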

#\text{}#

Mutually Exclusive

Two or more events are **mutually exclusive** (also called **disjoint**) if they have no outcomes in common, i.e., if their intersection is empty.

For example, if #C = \{\text{white}, \text{purple}, \text{pink}\}# then #A# and #C# are mutually exclusive, and also #B# and #C# are mutually exclusive.

#\text{}#

Probability

Each event is assigned a number from the interval #[0,1]# which is its **probability**. This number represents how likely it is that the event will occur when the experiment is performed.

If the probability is close to #1#, then the event is very likely to occur, whereas if the probability is close to #0#, then the event is very unlikely to occur. A probability of #0.5# means the event is just as likely to occur as not (as with, for example, getting heads on a fair coin flip).

Notation

For some event #A#, we use #\mathbb{P}(A)# to denote the probability of event #A#.

For example, if the experiment is “flip a fair coin” and #H# is the event that the coin lands on Heads, then we write #\mathbb{P}(H) = 0.5#, or, to be more explicit, #\mathbb{P}(\text{heads}) = 0.5#.

#\text{}#

Three Axioms of Probability

There are three **axioms of probability** upon which the rest of the field of probability is constructed:

- Let #S# denote the sample space of an experiment. Then #\mathbb{P}(S) = 1#. In other words, it is certain that at least one outcome in #S# will occur.
- Let #A# denote any event. Then #0 \leq \mathbb{P}(A) \leq 1#.
- If #A# and #B# are mutually exclusive events, then:

\[\mathbb{P}(A\cup B) = \mathbb{P}(A) + \mathbb{P}(B)\]

More generally, if #A_1,A_2,A_3,\ldots# are mutually exclusive events, then:

\[\mathbb{P}(A_1\cup A_2 \cup A_3 \cup \ldots) = \mathbb{P}(A_1) + \mathbb{P}(A_2) + \mathbb{P}(A_3) + \ldots\]
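As a small illustration of the third axiom (the fair six-sided die here is an illustrative assumption, not part of the text above): the events #\{1,2\}# and #\{5\}# are mutually exclusive, so their probabilities simply add.

```r
# Fair six-sided die (illustrative assumption): each face has probability 1/6
p <- rep(1/6, 6)

# A = {1, 2} and B = {5} are mutually exclusive
sum(p[c(1, 2)]) + sum(p[5])  # P(A) + P(B) = 3/6 = 0.5
sum(p[c(1, 2, 5)])           # P(A union B), the same value
```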

Probability Rules and Laws

From these axioms we can conclude:

- For any event #A#, the following property holds:

\[\mathbb{P}(A^c) = 1 - \mathbb{P}(A)\]

We call this the **complement rule**.

- Let #\varnothing# denote the empty set, i.e., the set with no outcomes. Then:

\[\mathbb{P}(\varnothing) = 0\]

- If #A# and #B# are any two events (not necessarily mutually exclusive), then:

\[\mathbb{P}(A\cup B) = \mathbb{P}(A) + \mathbb{P}(B) - \mathbb{P}(A \cap B)\]

We call this the **addition rule**.

- If #A# and #B# are any two events (not necessarily mutually exclusive), then:

\[\mathbb{P}(A) = \mathbb{P}(A\cap B) + \mathbb{P}(A\cap B^c)\]

and

\[\mathbb{P}(B) = \mathbb{P}(B\cap A) + \mathbb{P}(B\cap A^c)\]

This is called the **Law of Total Probability**.

For example, let #A# denote the event that a randomly-selected person from a specified population is a carrier of the *Toxoplasma gondii* parasite.

Let #B# denote the event that a randomly-selected person from the same population suffers from asthma.

Suppose #\mathbb{P}(A) = 0.21# and #\mathbb{P}(B) = 0.14#. Suppose further that #\mathbb{P}(A\cap B) = 0.08#.

**Example 1: **What is the probability that a randomly-selected person from this population does not suffer from either of these two conditions?

We can first use the addition rule to find the probability the person does suffer from at least one of these two conditions:

\[\mathbb{P}(A\cup B) = \mathbb{P}(A) + \mathbb{P}(B) - \mathbb{P}(A\cap B) = 0.21 + 0.14 - 0.08 = 0.27\]

Then, using the complement rule, we can find the probability that the person does not suffer from either of these two conditions:

\[\mathbb{P}\Big((A\cup B)^c\Big) = 1 - \mathbb{P}(A\cup B) = 1 - 0.27 = 0.73\]

**Example 2:** What is the probability that a person is a carrier of *T. gondii* but does not suffer from asthma?

This means we want #\mathbb{P}\Big(A\cap B^c\Big)#. The *Law of Total Probability* tells us that:

\[\begin{array}{rcccl}

\mathbb{P}(A) &=& \mathbb{P}(A\cap B) &+& \mathbb{P}\Big(A\cap B^c\Big)\\

0.21 &=& 0.08 &+& \mathbb{P}\Big(A\cap B^c\Big)

\end{array}\]

So #\mathbb{P}\Big(A\cap B^c\Big) = 0.21 - 0.08 = 0.13#.
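Both computations are simple arithmetic, so they are easy to check in R (the numbers below are the ones given in the examples):

```r
pA  <- 0.21  # P(A): carrier of the parasite
pB  <- 0.14  # P(B): suffers from asthma
pAB <- 0.08  # P(A and B)

pAorB <- pA + pB - pAB  # addition rule: 0.27
1 - pAorB               # complement rule (Example 1): 0.73
pA - pAB                # Law of Total Probability (Example 2): 0.13
```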

#\text{}#

If the number of outcomes in the sample space of an experiment is a finite number #N#, it might be the case that each of those outcomes has the *same probability of occurring.*

Equally-Likely Outcomes

If we run an experiment with #N# possible outcomes and the probability of each outcome equals #\cfrac{1}{N}#, then we say that we have **equally-likely outcomes**.

For example, if we roll a fair die whose six faces show #1# to #6# dots, then each face occurs with probability #1/6#.

Or if we are selecting one person at random from a list of #20# people, then there is a probability of #1/20# for each person to be selected.

#\text{}#

If event #A# consists of #k# outcomes among the #N# equally-likely outcomes, then:

\[\mathbb{P}(A) = \cfrac{k}{N}\]

For example, suppose a bag contains #30# pieces of candy which cannot be distinguished by touch, only by appearance. So if you reach into the bag without looking and select one piece of candy, there are #30# equally-likely outcomes.

If exactly #8# of those pieces are green, then the probability you will select a green piece of candy is:

\[\mathbb{P}(\text{green}) = \cfrac{8}{30} = \cfrac{4}{15} \approx 0.267 \]
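A simulation sketch of this example in R (the number of simulated draws is an arbitrary choice): drawing many times with replacement, the relative frequency of green should come out close to #4/15#.

```r
bag <- c(rep("green", 8), rep("other", 22))   # 30 pieces of candy, 8 of them green

draws <- sample(bag, 100000, replace = TRUE)  # many simulated selections
mean(draws == "green")                        # relative frequency, about 0.267
```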

#\text{}#

Using R

**Creating a vector in R**

You can create a vector in #\mathrm{R}# which contains the potential outcomes of an experiment.

For example, if someone is randomly selecting one of the #7# colors of the rainbow, the outcomes can be put into a vector named “Color” in one of the following ways:

`> Color = c("Red","Orange","Yellow","Green","Blue","Indigo","Violet")`

`> Color = c("R","O","Y","G","B","I","V")`

`> Color = 1:7`

With this last option, you would have to decide how to code the #7# colors with the integers from #1# to #7#.

**Randomly selecting an outcome in R without replacement**

You could randomly select an outcome using the #\mathrm{R}# command #\mathtt{sample()}# as follows:

`> sample(Color,1)`

If you wanted to randomly select #4# outcomes, use:

`> sample(Color,4)`

This method does not allow the same outcome to be chosen more than once. We call it sampling *without replacement* (once a color is selected, it is not placed back into the pool of available outcomes).

**Randomly selecting an outcome with** **replacement**

If you want sampling *with replacement*, so that the same outcome can be selected multiple times, use:

`> sample(Color,4,replace=TRUE)`

or it suffices to write:

`> sample(Color,4,r=T)`

**Assigning probabilities to outcomes**

In the above situations, each outcome has the same probability of being selected. If you want to give the outcomes different probabilities, then you have to specify these with a vector of probabilities, one for each outcome, so that the probabilities sum to one. For example:

`> sample(Color,1,p=c(0.1,0.2,0.2,0.15,0.1,0.05,0.2))`

or

`> sample(Color,20,r=T,p=c(0.1,0.2,0.2,0.15,0.1,0.05,0.2))`
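As a sketch of how to check such a probability vector (the specific numbers are the ones used above): it must sum to one, and simulated relative frequencies should come out close to it.

```r
Color <- c("R", "O", "Y", "G", "B", "I", "V")
p <- c(0.1, 0.2, 0.2, 0.15, 0.1, 0.05, 0.2)

sum(p)  # must equal 1

# Simulate many draws and compare relative frequencies to p
draws <- sample(Color, 100000, replace = TRUE, prob = p)
table(draws) / length(draws)
```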
