Introduction to Probability


When an experiment is repeated under similar and controlled conditions, we typically come across two types of situations:

  1. The outcome is unique or certain.

  2. The outcome is not unique and may vary.

Deterministic Events

These are events where the outcome is predictable with certainty if the initial conditions are known. The same conditions always produce the same result.

Examples:

  • Calculating the area of a rectangle when length and breadth are known.

  • Water boiling at 100°C under normal atmospheric pressure.

  • A ball dropped from a height in a vacuum will fall due to gravity.

Probabilistic Events

These are events where the outcome is uncertain and may vary even under the same conditions. We can only estimate the chances of each possible result.

Examples:

  • Tossing a fair coin (Outcome: Heads or Tails, each with 0.5 probability).

  • Rolling a die (Possible outcomes: 1 to 6).

  • Predicting tomorrow's weather (e.g., 70% chance of rain).

Definitions of Various Terms in Probability

In the study of probability, several key terms help us describe and analyze uncertain events. Below are some fundamental definitions:


1. Trial and Event

When an experiment is repeated under identical or similar conditions but does not produce a unique outcome, it is called a trial.
Each possible outcome of that trial is called an event.

Examples:

  • (i) Throwing a die is a trial. Getting 1, 2, 3, 4, 5, or 6 is an event.

  • (ii) Tossing a coin is a trial. Getting a Head (H) or a Tail (T) is an event.

  • (iii) Drawing two cards from a well-shuffled pack is a trial. Getting a king and a queen is an event.


2. Exhaustive Events (or Exhaustive Cases)

The total number of all possible outcomes of a trial is known as the set of exhaustive events or exhaustive cases.

Examples:

  • (i) In tossing a coin, there are 2 exhaustive cases: Head and Tail.
    (We ignore the rare case where the coin stands on its edge.)

  • (ii) In throwing a die, there are 6 exhaustive cases: 1, 2, 3, 4, 5, and 6.

  • (iii) In drawing 2 cards from a deck of 52, the exhaustive number of cases is
    \binom{52}{2} = 1326, since 2 cards can be chosen in 1326 different ways.

  • (iv) In throwing two dice, the exhaustive number of cases is
    6×6=36, as each of the 6 outcomes of the first die can be paired with any of the 6 outcomes of the second die.

In general, in a throw of n dice the exhaustive number of cases is 6^n, since each die contributes 6 possibilities independently.
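
A minimal Python sketch of these counts, using the standard math.comb function (the helper n_dice_cases is only illustrative):

```python
from math import comb

# Exhaustive cases for the examples above
coin = 2                 # tossing one coin: {Head, Tail}
die = 6                  # throwing one die: {1, ..., 6}
two_cards = comb(52, 2)  # choosing 2 cards from 52, order ignored
two_dice = 6 * 6         # ordered pairs (first die, second die)

def n_dice_cases(n):
    """Exhaustive number of cases when n dice are thrown."""
    return 6 ** n

print(two_cards)        # 1326
print(two_dice)         # 36
print(n_dice_cases(3))  # 216
```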

3. Favourable Events (or Cases)

The number of cases favourable to an event is the count of all outcomes that result in the happening of that event.

Examples:

  • (i) In drawing one card from a deck of 52:

    • Number of favourable cases for drawing an ace = 4 (one from each suit).

    • Number of favourable cases for drawing a spade = 13.

    • Number of favourable cases for drawing a red card = 26 (13 hearts + 13 diamonds).

  • (ii) In throwing two dice, the favourable cases for getting a sum of 5 are:
    (1, 4), (4, 1), (2, 3), (3, 2) → Total = 4 favourable cases.
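
The same favourable cases can be enumerated directly; a small sketch using itertools.product:

```python
from itertools import product

# All 36 ordered outcomes of two dice; keep those whose faces sum to 5
outcomes = list(product(range(1, 7), repeat=2))
favourable = [o for o in outcomes if sum(o) == 5]

print(favourable)       # [(1, 4), (2, 3), (3, 2), (4, 1)]
print(len(favourable))  # 4 favourable cases out of 36
```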


4. Mutually Exclusive Events

Two or more events are said to be mutually exclusive (or incompatible) if the occurrence of one event rules out the occurrence of the others in the same trial.
In other words, they cannot happen at the same time.

Examples:

  • (i) In throwing a die, the outcomes 1, 2, 3, 4, 5, 6 are mutually exclusive. If you get a 3, you cannot get any other number in the same throw.

  • (ii) In tossing a coin, the outcomes Head (H) and Tail (T) are mutually exclusive. Both cannot occur in a single toss.


5. Equally Likely Events

Events are equally likely if, given all relevant evidence, there is no reason to expect one outcome more than another.

Examples:

  • (i) In tossing a fair coin, the outcomes Head and Tail are equally likely (each with probability ½).

  • (ii) In throwing a fair die, all six faces (1 to 6) are equally likely (each with probability 1⁄6).


6. Independent Events

Two or more events are independent if the occurrence or non-occurrence of one event does not affect the probability of the other.

Examples:

  • (i) In tossing a coin multiple times:

    • Getting a head in the first toss is independent of the results in the second, third, etc.

  • (ii) In drawing a card from a well-shuffled deck with replacement:

    • The outcome of the second draw is independent of the first.

However, if the first card is not replaced, the second draw depends on the outcome of the first; such events are dependent.

🎯 Classical (Mathematical / A Priori) Definition of Probability

This definition is used when:

  • All outcomes of an experiment (trial) are known in advance,

  • They are mutually exclusive (no two happen at the same time),

  • And equally likely (all outcomes have the same chance).

Definition:

If an experiment has:

  • n = Total number of exhaustive, equally likely, and mutually exclusive outcomes,

  • m = Number of outcomes favourable to an event E,

Then the probability of event E happening is:

P(E) = \frac{m}{n}

Example 1: Tossing a fair coin

  • Outcomes: Head (H), Tail (T) → n=2

  • Favourable outcomes for event “getting Head” → m=1

P(\text{Head}) = \frac{1}{2}

Example 2: Throwing a fair die

  • Outcomes: 1 to 6 → n=6

  • Favourable outcomes for event “getting a 4” → m=1

P(4) = \frac{1}{6}

Example 3: Drawing a red card from a well-shuffled deck

  • Total cards: 52 → n = 52

  • Red cards: 26 → m = 26

P(\text{Red}) = \frac{26}{52} = \frac{1}{2}
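
A quick illustration of the formula P(E) = m/n for the three examples above (the helper classical_probability is only illustrative):

```python
from fractions import Fraction

def classical_probability(m, n):
    """P(E) = m / n: m favourable cases out of n equally likely cases."""
    return Fraction(m, n)

print(classical_probability(1, 2))    # P(Head) for one coin toss    -> 1/2
print(classical_probability(1, 6))    # P(4) for one die throw       -> 1/6
print(classical_probability(26, 52))  # P(Red card) from a full deck -> 1/2
```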

🔁 Complement of an Event (Non-happening of E)

If:

  • n = Total number of exhaustive, equally likely outcomes

  • m = Number of favourable outcomes for event E

Then:

  • The number of outcomes not favourable to event E = n - m

So, the probability that event E does not happen is:

q = P(\text{not } E) = \frac{n - m}{n}

Since:

P(E) = p = \frac{m}{n}

We can also write:

q = 1 - p \quad \text{or} \quad p + q = 1

Example:

Throwing a die, what is the probability of not getting a 5?

  • Total outcomes n=6

  • Favourable outcomes for “getting 5” m=1

  • So:

    p = \frac{1}{6}, \quad q = 1 - \frac{1}{6} = \frac{5}{6}
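
A one-line check of the same complement calculation, sketched with Python's Fraction:

```python
from fractions import Fraction

p = Fraction(1, 6)   # P(getting a 5) on one throw of a die
q = 1 - p            # P(not getting a 5)

print(q)             # 5/6
print(p + q == 1)    # True: p + q = 1 always holds
```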

Important Properties:

  • 0 \leq p \leq 1

  • 0 \leq q \leq 1

  • p + q = 1

That means both probability of success (p) and failure (q) are between 0 and 1 (inclusive).

🔍 Remarks on Classical (Mathematical) Probability

1. Success and Failure

  • The probability that an event happens is denoted by p and is called the probability of success.

  • The probability that the event does not happen is denoted by q and is called the probability of failure.

  • Mathematically:

    p = P(E), \quad q = 1 - p = P(\text{not } E)

2. Certain and Impossible Events

  • If the probability of an event is 1, i.e.,

    P(E) = 1,

    then the event E is certain to occur.
    Example: The sun rising in the east.

  • If the probability of an event is 0, i.e.,

    P(E) = 0,

    then the event E is impossible.
    Example: Getting a 7 on a standard die.

⚠️ Limitations of the Classical Definition of Probability

The classical (a priori) definition assumes:

  • All outcomes are equally likely

  • The total number of outcomes is finite

However, this definition fails in the following cases:


1. When Outcomes Are Not Equally Likely

  • The classical approach requires that each outcome has the same chance of occurring.

  • If some outcomes are more likely than others, the formula

    P(E) = \frac{\text{Number of favourable cases}}{\text{Number of exhaustive cases}}

    is no longer valid.

Example:

Suppose the probability of a student passing an exam is being estimated. The outcomes — pass and fail — are not equally likely (e.g., based on preparation level, question paper difficulty, etc.).

Hence, classical probability cannot be applied.


2. When the Number of Possible Outcomes Is Infinite

  • Classical probability works only with a finite number of outcomes.

  • If the sample space is infinite, then counting favourable and total outcomes is not feasible.

Example:

Choosing a random real number between 0 and 1 — the number of possible outcomes is infinite.

Therefore, we need to use other probability definitions (like axiomatic or statistical) in such cases.

📊 Statistical or Empirical Probability

🔹 Definition (Von Mises)

When a trial is repeated multiple times under identical and homogeneous conditions, and an event occurs m times out of n trials, then the probability of the event is defined as:

P(E) = \lim_{n \to \infty} \frac{m}{n}

  • That is, empirical probability is the long-run relative frequency of an event.

  • It is assumed that the limit exists and is finite and unique.

This definition is used when:

  • Outcomes are not equally likely, or

  • We cannot calculate probability theoretically.
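
A small simulation sketch of this long-run relative frequency, assuming a fair coin (the function relative_frequency is illustrative):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def relative_frequency(n):
    """Relative frequency m/n of 'Head' in n simulated fair-coin tosses."""
    m = sum(random.random() < 0.5 for _ in range(n))
    return m / n

for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))  # m/n settles near 0.5 as n grows
```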

🔸 Example : Leap Year and 53 Sundays

Problem:

What is the probability that a randomly selected leap year contains 53 Sundays?

Solution:

  • A leap year has 366 days = 52 full weeks + 2 extra days.

  • The extra 2 days can be:

    1. Sunday & Monday

    2. Monday & Tuesday

    3. Tuesday & Wednesday

    4. Wednesday & Thursday

    5. Thursday & Friday

    6. Friday & Saturday

    7. Saturday & Sunday
      ⇒ Total = 7 possible combinations

  • For a year to have 53 Sundays, one of the extra days must be a Sunday.

    • This happens in cases:
      (i) Sunday & Monday
      (vii) Saturday & Sunday
      ⇒ Number of favourable outcomes = 2

✅ Therefore,

P(\text{Leap year has 53 Sundays}) = \frac{2}{7}
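
The same count can be reproduced by listing the seven consecutive-day pairs (a minimal sketch; the day labels are illustrative):

```python
days = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]

# The 2 extra days of a leap year form a pair of consecutive weekdays;
# there are 7 such pairs, each equally likely.
pairs = [(days[i], days[(i + 1) % 7]) for i in range(7)]
favourable = [p for p in pairs if "Sun" in p]

print(pairs)                             # all 7 possible pairs
print(len(favourable), "/", len(pairs))  # 2 / 7
```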

🎯 Example 4.2: Probability of Drawing One White and One Blue Ball

Problem:
A bag contains 3 red, 6 white, and 7 blue balls.
If two balls are drawn at random, what is the probability that one is white and the other is blue?


Solution:

Step 1: Total number of balls =

3\ (\text{red}) + 6\ (\text{white}) + 7\ (\text{blue}) = 16

Step 2: Total number of ways to draw 2 balls out of 16:

\binom{16}{2} = \frac{16 \times 15}{2} = 120

Step 3: Favourable cases: One white and one blue

  • Choose 1 white from 6 white balls:

    \binom{6}{1} = 6
  • Choose 1 blue from 7 blue balls:

    \binom{7}{1} = 7
  • So, the number of favourable outcomes is

    6 \times 7 = 42

Step 4: Required Probability =

\frac{\text{Favourable cases}}{\text{Total cases}} = \frac{42}{120} = \frac{7}{20}

📌 Final Answer:

\boxed{\frac{7}{20}}
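
A quick check of the whole computation with math.comb (a sketch, not part of the original solution):

```python
from math import comb
from fractions import Fraction

total = comb(16, 2)                   # 120 ways to draw 2 balls from 16
favourable = comb(6, 1) * comb(7, 1)  # 6 x 7 = 42 ways: one white, one blue

print(Fraction(favourable, total))    # 7/20
```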

Axiomatic Approach to Probability

📘 Introduction

The axiomatic approach to probability was introduced by Andrey Kolmogorov, a Russian mathematician, in 1933. This modern approach builds a rigorous mathematical foundation for probability using set theory and logic. It overcomes the limitations of the classical and empirical/statistical definitions of probability.


🔍 Why Axiomatic?

The classical and statistical approaches to probability had some limitations:

  • Classical probability assumes equally likely outcomes, which is not always realistic.

  • Statistical probability depends on large numbers of repeated experiments, which is not always possible.

The axiomatic approach defines probability based on a set of rules (axioms), independent of how probability is interpreted in real life.


📐 What Is an Axiomatic System?

In mathematics, an axiomatic system:

  • Starts with undefined terms and basic assumptions (axioms).

  • Uses these axioms to logically derive theorems.

  • These theorems are abstract but can be applied to real-world problems.


🧱 Kolmogorov's Axioms of Probability

Let S be the sample space, and let A be any event (a subset of S). A probability function P assigns a number to each event such that:

  1. Non-negativity:

    P(A) \geq 0 \quad \text{for any event } A
  2. Normalization:

    P(S) = 1 \quad \text{(the probability of the sample space is 1)}
  3. Additivity (for mutually exclusive events):
    If A \cap B = \emptyset, then

    P(A \cup B) = P(A) + P(B)

These three axioms form the basis of modern probability theory.
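
A minimal sketch of how a finite probability model satisfies these axioms, assuming a fair six-sided die (the names S, p, P are illustrative):

```python
# A finite probability model checked against the three axioms
S = {1, 2, 3, 4, 5, 6}
p = {outcome: 1 / 6 for outcome in S}   # assumed: a fair die

def P(event):
    """Probability of an event = sum of the probabilities of its outcomes."""
    return sum(p[w] for w in event)

A, B = {1, 2}, {5, 6}                         # two disjoint events
print(P(A) >= 0 and P(B) >= 0)                # Axiom 1: non-negativity -> True
print(abs(P(S) - 1) < 1e-12)                  # Axiom 2: P(S) = 1       -> True
print(abs(P(A | B) - (P(A) + P(B))) < 1e-12)  # Axiom 3: additivity     -> True
```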


🧠 Key Points

  • The axiomatic approach is logical, abstract, and independent of real-world assumptions.

  • It is the foundation for all modern probability theory.

  • The classical and empirical definitions are just special cases within this broader framework.

  • Theorems derived from these axioms can later be interpreted and applied to real-world events.

Random Experiment and Sample Space

🎯 What Is a Random Experiment?

A random experiment is a process or activity that:

  • Can be repeated under identical conditions, and

  • Has uncertain outcomes that cannot be predicted exactly in advance.

Examples:

  • Tossing a coin

  • Rolling a die

  • Drawing a card from a shuffled deck

  • Conducting a scientific/agricultural test (e.g., testing fertilizer effects)

👉 Each repetition of the experiment is called a trial.


🎲 Outcome and Sample Space

  • The result of a trial is called an outcome or elementary event (also called a sample point).

  • The set of all possible outcomes is called the sample space.

💡 Examples of Sample Spaces:

  1. Tossing a coin:
    Sample space: S = \{ \text{Head}, \text{Tail} \}

  2. Rolling a die:
    Sample space: S = \{ 1, 2, 3, 4, 5, 6 \}

  3. Drawing a card from a deck:
    Sample space: S includes all 52 cards


🔍 Formal Definition

Let a random experiment be denoted by E, and let the possible outcomes be:

e_1, e_2, \dots, e_n

These outcomes satisfy the following:

  1. Mutually exclusive – no two outcomes occur at the same time.

  2. Exhaustive – at least one of these outcomes must occur in every trial.

Then the sample space is the set:

S = \{ e_1, e_2, \dots, e_n \}

This set S is the foundation on which we define probabilities.


🧠 Why It Matters?

We need the idea of a sample space and random experiment because:

  • They help us model real-world uncertainty.

  • They allow us to use mathematical tools to calculate probabilities.

  • They support additive and frequency-based interpretations of probability.

For example:

  • The probability of getting a Head when tossing a fair coin is \frac{1}{2}.

  • The probability of getting either a 5 or a 6 when rolling a fair die is:

    \frac{1}{6} + \frac{1}{6} = \frac{1}{3}

📌 Summary

  • A random experiment is any process with uncertain results.

  • An outcome is a single result of a trial.

  • The sample space is the set of all possible outcomes.

  • The sample space helps us assign probabilities in a logical and systematic way.


🔹 1. Universal Set

  • The sample space S acts like a universal set for all outcomes related to a random experiment.

  • Any event or group of outcomes is a subset of this sample space.


🔹 2. Finite vs Infinite Sample Space

  • A sample space is called finite if it contains a limited number of outcomes.

    • Example: Tossing a coin once
      S = \{ H, T \} → 2 outcomes → finite

  • A sample space is infinite if it contains endless outcomes.

    • Example: Tossing a coin until the first Head appears
      S = \{ H, TH, TTH, TTTH, \dots \} → continues forever → infinite

    Each outcome here represents:

    • H: Head on first toss

    • TH: Tail, then Head

    • TTH: Two Tails, then Head
      and so on...
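
A short simulation sketch of this infinite sample space (the helper toss_until_head is illustrative):

```python
import random

random.seed(1)  # fixed seed for a reproducible run

def toss_until_head():
    """One sample point from the infinite sample space {H, TH, TTH, ...}."""
    outcome = ""
    while True:
        toss = random.choice("HT")
        outcome += toss
        if toss == "H":
            return outcome

print([toss_until_head() for _ in range(5)])  # five simulated sample points, e.g. ['H', 'TTH', ...]
```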


🔹 3. Discrete vs Continuous Sample Space

  • A discrete sample space contains either:

    • A finite number of outcomes, or

    • An infinite but countable number of outcomes (like natural numbers)

  • A continuous sample space has uncountably infinite outcomes, like all real numbers between 0 and 1.

    • Example: Measuring exact time, distance, etc.

🧠 Note: In this book, we are dealing only with discrete sample spaces.


📘 4 Event

🔹 What is an Event?

An event is:

  • Any subset of the sample space S.

  • It may consist of one outcome, multiple outcomes, or no outcome at all (the empty set).

✍️ Formal Definition:

“Of all the possible outcomes in the sample space, some may satisfy a specific condition. The set of those outcomes is called an event.”


🔹 Types of Events

  1. Elementary Event:

    • Contains only one outcome

    • Example: In rolling a die, getting a 4
      A = \{4\}

  2. Impossible Event:

    • Contains no outcome

    • Represented by the empty set: \emptyset

    • Example: Rolling a die and getting an 8

  3. 🎯 Certain Event:

    • The event that includes all possible outcomes

    • Represented by the sample space itself: S

    • Always happens when the experiment is performed

  4. 🎲 Compound Event:

    • Contains more than one outcome

    • Example: Getting an even number when rolling a die
      A = \{2, 4, 6\}


Summary Table

| Term | Meaning | Example |
| --- | --- | --- |
| Sample Space (S) | All possible outcomes for a die | \{1, 2, 3, 4, 5, 6\} |
| Elementary Event | One single outcome | \{4\} |
| Compound Event | More than one outcome | \{2, 4, 6\} |
| Impossible Event | No outcome | \emptyset |
| Certain Event | All outcomes | S |

🔹 Example 1: Single Toss of a Coin

  • When you toss a coin once, the possible outcomes are:

    • H = Head

    • T = Tail

  • So the Sample Space is:

    S = \{H, T\}
  • The number of sample points is:

    n(S) = 2
  • Each of these outcomes (H and T) is an elementary event.


🔹 Example 2: Two Tosses of a Coin

  • Now we toss a coin two times.

  • The possible outcomes are:

    • First toss: H or T

    • Second toss: H or T

    • So the combinations are:

      • HH: Head in both tosses

      • HT: Head then Tail

      • TH: Tail then Head

      • TT: Tail in both tosses

  • Thus, the Sample Space is:

    S = \{HH, HT, TH, TT\}
  • Total number of outcomes:

    n(S) = 4

🔹 Example Event: Getting at Least One Head

  • To form an event, we select a subset of the sample space.

  • Let’s define the event:

    A = Getting at least one Head

  • The outcomes that satisfy this condition are:

    • HH

    • HT

    • TH

  • So, the event A is:

    A = \{HH, HT, TH\}
  • Note:

    • TT is not included in A because it has no Head.

    • This event is a compound event because it has multiple outcomes.
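
The same construction in Python sets (a minimal sketch; the variable names are illustrative):

```python
from itertools import product

# Sample space for two tosses of a coin and the event "at least one Head"
S = {"".join(t) for t in product("HT", repeat=2)}
A = {outcome for outcome in S if "H" in outcome}

print(sorted(S))  # ['HH', 'HT', 'TH', 'TT']
print(sorted(A))  # ['HH', 'HT', 'TH']  (TT is excluded: it has no Head)
```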


🧠 Key Takeaway:

  • A sample space lists all possible outcomes of an experiment.

  • An event is any subset of that sample space.

Algebra of Events (Set Operations on Events)

Let A, B, C be events from a sample space S. The following rules apply:


🔹 (i) Union of Events

A \cup B = \{\omega \in S : \omega \in A \text{ or } \omega \in B\}

🔸 Meaning: The event that either A or B or both occur.


🔹 (ii) Intersection of Events

A \cap B = \{\omega \in S : \omega \in A \text{ and } \omega \in B\}

🔸 Meaning: The event that both A and B occur together.


🔹 (iii) Complement of an Event A

A^c = \{\omega \in S : \omega \notin A\}

🔸 Meaning: The event that A does not occur.


🔹 (iv) Difference of Events (A minus B)

A - B = \{\omega \in S : \omega \in A \text{ but } \omega \notin B\}

🔸 Meaning: Outcomes that are in A only, not in B.


🔹 (v) Generalizations

  • For a collection of events A_1, A_2, A_3, \dots, A_n:

    • Union:

      \bigcup_{i=1}^{n} A_i = A_1 \cup A_2 \cup \dots \cup A_n

      Meaning: At least one of the events occurs.

    • Intersection:

      \bigcap_{i=1}^{n} A_i = A_1 \cap A_2 \cap \dots \cap A_n

      Meaning: All of the events occur together.


🔹 (vi) Subset Relation

A \subseteq B \iff \text{for every } \omega \in A,\ \omega \in B

🔸 Meaning: Every outcome in A is also in B.


🔹 (vii) Superset Relation

B \supseteq A \iff A \subseteq B

🔹 (viii) Equality of Sets

A = B \iff A \subseteq B \text{ and } B \subseteq A

🔸 Meaning: A and B contain exactly the same outcomes.


🔹 (ix) Disjoint Events (Mutually Exclusive)

A \cap B = \emptyset

🔸 Meaning: A and B cannot occur together.


🔹 (x) Alternate Notation for Disjoint Union

If A and B are disjoint, then:

A \cup B = A + B

🔹 (xi) Symmetric Difference (Exactly One of A or B)

A \triangle B = (A \cup B) - (A \cap B)

or equivalently:

A \triangle B = (A - B) \cup (B - A)

🔸 Meaning: Outcomes that are in A or B but not both.

Remark. Since events are subsets of S, all the laws of set theory, viz. the commutative laws, associative laws, distributive laws, De Morgan's laws, etc., hold for the algebra of events.
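
Because events behave like sets, Python's built-in set operators mirror this algebra directly; a small sketch with illustrative events on one throw of a die:

```python
# Events as Python sets: one throw of a die
S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}   # "an even number"
B = {4, 5, 6}   # "a number greater than 3"

print(A | B)    # union A ∪ B: A or B (or both) occurs      -> {2, 4, 5, 6}
print(A & B)    # intersection A ∩ B: both occur            -> {4, 6}
print(S - A)    # complement A^c: A does not occur          -> {1, 3, 5}
print(A - B)    # difference A - B: A occurs but not B      -> {2}
print(A ^ B)    # symmetric difference: exactly one occurs  -> {2, 5}
print(S - (A | B) == (S - A) & (S - B))  # De Morgan's law  -> True
```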


Given: Three arbitrary events A, B, and C

We are to express certain compound events in terms of set operations (intersection ∩, union ∪, complement Aᶜ, etc.).


🔹 (i) Only A occurs

This means A occurs, but B and C do not.

\boxed{A \cap B^c \cap C^c}


🔹 (ii) Both A and B occur, but not C

\boxed{A \cap B \cap C^c}


🔹 (iii) All three events occur

\boxed{A \cap B \cap C}


🔹 (iv) At least one occurs

At least one of A, B, or C occurs means the union of all three.

\boxed{A \cup B \cup C}


🔹 (v) At least two occur

This means any two of the events occur, whether or not the third also occurs; we combine the three "exactly two" cases with the case where all three occur:

\boxed{(A \cap B \cap C^c) \cup (A \cap B^c \cap C) \cup (A^c \cap B \cap C) \cup (A \cap B \cap C)}


🔹 (vi) Only one occurs

This means exactly one of A, B, or C occurs, and the others do not:

\boxed{(A \cap B^c \cap C^c) \cup (A^c \cap B \cap C^c) \cup (A^c \cap B^c \cap C)}


🔹 (vii) Exactly two occur (and no more)

Here, any two occur and the third does not:

\boxed{(A \cap B \cap C^c) \cup (A \cap B^c \cap C) \cup (A^c \cap B \cap C)}


🔹 (viii) None occurs

This means A, B, and C all do not occur. So:

\boxed{A^c \cap B^c \cap C^c}

Or alternatively, the complement of “at least one occurs”:

\boxed{(A \cup B \cup C)^c}
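
A quick verification of some of these expressions, sketched with arbitrary illustrative sets A, B, and C:

```python
# Checking a few of the expressions above with illustrative sets
S = set(range(1, 11))
A, B, C = {1, 2, 3, 4}, {3, 4, 5, 6}, {4, 6, 7, 8}
Ac, Bc, Cc = S - A, S - B, S - C                   # complements within S

only_A      = A & Bc & Cc                                  # (i)   only A occurs
all_three   = A & B & C                                    # (iii) all three occur
exactly_two = (A & B & Cc) | (A & Bc & C) | (Ac & B & C)   # (vii) exactly two occur
none        = Ac & Bc & Cc                                 # (viii) none occurs

print(only_A)                   # {1, 2}
print(all_three)                # {4}
print(exactly_two)              # {3, 6}
print(none == S - (A | B | C))  # True: same as (A ∪ B ∪ C)^c
```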

📘 Mathematical Notion of Probability

When performing a random experiment, the sample space S consists of all possible outcomes.

Let:

  • N = total number of trials of the experiment,

  • N_A = number of times event A occurs.

🔹 Frequency Interpretation of Probability

In the long run (as N \to \infty), the probability of event A, denoted P(A), is approximated by the relative frequency:

\boxed{P(A) = \frac{N_A}{N}}

This gives an empirical (frequentist) view of probability.


📘 Mathematical (Axiomatic) Definition of Probability

Since the frequency definition depends on observations and can't define probabilities purely mathematically, we need a formal definition.

We define a function P, called the probability function, on a sample space S, satisfying certain axioms.

Let:

  • \mathcal{D} = a σ-field (sigma-field or sigma-algebra) of subsets of S, i.e., the collection of all events,

  • A \in \mathcal{D} = any event (a subset of S).


📌 Axioms of Probability (Kolmogorov's Axioms)

The probability function P must satisfy:

  1. Non-Negativity (Positiveness)

    \boxed{P(A) \geq 0} \quad \text{for all } A \in \mathcal{D}
  2. Normalization (Certainty)

    \boxed{P(S) = 1}

    That is, the probability that some outcome occurs is 1.

  3. Countable Additivity (Union Axiom)
    If A_1, A_2, A_3, \dots \in \mathcal{D} are mutually disjoint (i.e., A_i \cap A_j = \emptyset for i \neq j), then:

    \boxed{P\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} P(A_i)}

✅ Summary of Axioms

| Axiom | Name | Meaning |
| --- | --- | --- |
| 1. P(A) \geq 0 | Positiveness | Probability is never negative. |
| 2. P(S) = 1 | Certainty | The entire sample space always has probability 1. |
| 3. Additivity | Countable Additivity | Probabilities of disjoint events add up correctly. |


From Frequency to Axioms of Probability

Let's assume a random experiment is repeated N times, with:

  • N_A: number of times event A occurs

  • Then, the probability of event A is:

\boxed{P(A) = \frac{N_A}{N}}

Now, let’s see how this definition satisfies the three axioms of probability.


✅ Axiom 1: Non-negativity

Since N_A \geq 0 and N > 0, we get:

\boxed{P(A) = \frac{N_A}{N} \geq 0}

✅ So the probability is never negative.


✅ Axiom 2: Normalization

The sample space S includes all possible outcomes, so the event S occurs in every one of the N trials. Hence:

N_S = N \Rightarrow P(S) = \frac{N}{N} = \boxed{1}

✅ The total probability of the sample space is 1.


✅ Axiom 3: Additivity (for disjoint events)

Let A and B be mutually exclusive (disjoint) events:

A \cap B = \emptyset

Let:

  • N_A = number of times A occurs

  • N_B = number of times B occurs

  • N_{A \cup B} = N_A + N_B (since A and B cannot occur in the same trial)

Then:

P(A \cup B) = \frac{N_A + N_B}{N} = \frac{N_A}{N} + \frac{N_B}{N} = \boxed{P(A) + P(B)}

✅ So, probabilities of disjoint events add.
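
A simulation sketch of this additivity argument, assuming a fair die and the illustrative disjoint events A = {1, 2} and B = {6}:

```python
import random

# Counts (and relative frequencies) of disjoint events add
random.seed(0)
N = 100_000
rolls = [random.randint(1, 6) for _ in range(N)]

N_A  = sum(r in {1, 2} for r in rolls)
N_B  = sum(r == 6 for r in rolls)
N_AB = sum(r in {1, 2, 6} for r in rolls)   # event A ∪ B

print(N_AB == N_A + N_B)                    # True: counts of disjoint events add
print(round(N_A / N + N_B / N, 4), round(N_AB / N, 4))  # identical relative frequencies
```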


📘 Extended Axiom of Addition (Countable Additivity)

If an event A can happen through the occurrence of any one of countably many mutually disjoint events A_1, A_2, A_3, \dots, i.e.,

A = \bigcup_{i=1}^{\infty} A_i \quad \text{with } A_i \cap A_j = \emptyset \text{ for } i \neq j

Then,

\boxed{P(A) = \sum_{i=1}^{\infty} P(A_i)}

This is the generalization of Axiom 3 to infinitely many disjoint events, and is a foundation of modern probability theory.

THEOREMS ON PROBABILITIES OF EVENTS

Theorem: Probability of the Impossible Event is Zero

🔷 Statement:

\boxed{P(\emptyset) = 0}

Theorem: Probability of the Complementary Event

🔷 Statement:

If A is an event in the sample space S, and A' (or \overline{A}) denotes the complement of A, then:

\boxed{P(A') = 1 - P(A)}

Or equivalently:

\boxed{P(A) = 1 - P(A')}
