Concept Core
From classical probability to Bayes' theorem — the complete framework.
Classical Probability & Fundamental Rules
Classical definition: P(A) = (favourable outcomes)/(total equally likely outcomes)
0 ≤ P(A) ≤ 1;  P(A) + P(A') = 1
P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
Mutually exclusive events: A ∩ B = ∅ → P(A ∪ B) = P(A) + P(B)
Independent events: P(A ∩ B) = P(A) × P(B)
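For readers who like to verify formulas numerically, here is a minimal Python sketch (illustrative only, not part of the syllabus) that checks the addition rule and the independence product rule by brute-force enumeration of two fair dice; the two events chosen are arbitrary examples.

```python
from fractions import Fraction
from itertools import product

# Sample space: all 36 equally likely outcomes of two fair dice
omega = list(product(range(1, 7), repeat=2))

def prob(event):
    """Classical probability: favourable outcomes / total outcomes."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: w[0] % 2 == 0      # first die shows an even number
B = lambda w: w[0] + w[1] == 7   # sum is 7

p_A, p_B = prob(A), prob(B)
p_AB = prob(lambda w: A(w) and B(w))

# Addition rule: P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
print(prob(lambda w: A(w) or B(w)) == p_A + p_B - p_AB)   # True

# Independence check: P(A ∩ B) = P(A) × P(B)
print(p_AB == p_A * p_B)   # True: these two events happen to be independent
```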
Conditional Probability
P(A|B) = P(A ∩ B) / P(B), given that B has occurred (requires P(B) > 0)
P(A|B) ≠ P(B|A) in general. For independent events: P(A|B) = P(A) — B's occurrence doesn't affect A.
Multiplication rule: P(A ∩ B) = P(A) × P(B|A) = P(B) × P(A|B)
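A similar enumeration (illustrative Python, with arbitrary example events) confirms that P(A|B) and P(B|A) generally differ and that the multiplication rule holds.

```python
from fractions import Fraction
from itertools import product

omega = list(product(range(1, 7), repeat=2))   # two fair dice

def prob(event):
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: w[0] + w[1] == 8   # sum is 8
B = lambda w: w[0] == 3          # first die shows 3

p_AB = prob(lambda w: A(w) and B(w))

# P(A|B) = P(A ∩ B) / P(B)  and  P(B|A) = P(A ∩ B) / P(A)
p_A_given_B = p_AB / prob(B)
p_B_given_A = p_AB / prob(A)
print(p_A_given_B, p_B_given_A)   # 1/6 and 1/5: not equal in general

# Multiplication rule: P(A ∩ B) = P(B) × P(A|B) = P(A) × P(B|A)
print(p_AB == prob(B) * p_A_given_B == prob(A) * p_B_given_A)   # True
```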
Total Probability Theorem
If B₁, B₂, ..., Bₙ are mutually exclusive and exhaustive events:
P(A) = Σ P(Bᵢ) × P(A|Bᵢ)
Used when the sample space is partitioned and we know conditional probabilities in each partition.
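A quick numerical check of the theorem; the priors and conditional probabilities below are made-up illustrative values, not taken from any problem in this chapter.

```python
# Illustrative partition: three sources B1, B2, B3 (priors sum to 1)
priors      = {"B1": 0.5, "B2": 0.3, "B3": 0.2}     # P(Bi)
likelihoods = {"B1": 0.10, "B2": 0.40, "B3": 0.25}  # P(A | Bi)

# Total probability: P(A) = Σ P(Bi) · P(A|Bi)
p_A = sum(priors[b] * likelihoods[b] for b in priors)
print(round(p_A, 3))   # 0.5*0.10 + 0.3*0.40 + 0.2*0.25 = 0.22
```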
Bayes' Theorem
Given event A has occurred, find which partition Bₖ caused it:
P(Bₖ|A) = [P(Bₖ) × P(A|Bₖ)] / [Σ P(Bᵢ) × P(A|Bᵢ)]
The numerator is the individual term; denominator is P(A) from total probability theorem.
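The same idea as a small Python sketch; bayes_posterior is a hypothetical helper written for this note, and the numbers reuse the illustrative values from the total-probability sketch above.

```python
def bayes_posterior(priors, likelihoods, k):
    """P(Bk|A) = P(Bk)·P(A|Bk) / Σ P(Bi)·P(A|Bi)."""
    p_A = sum(priors[b] * likelihoods[b] for b in priors)   # denominator: total probability
    return priors[k] * likelihoods[k] / p_A

# Same illustrative numbers as in the total-probability sketch above
priors      = {"B1": 0.5, "B2": 0.3, "B3": 0.2}
likelihoods = {"B1": 0.10, "B2": 0.40, "B3": 0.25}
print(round(bayes_posterior(priors, likelihoods, "B2"), 4))   # 0.12 / 0.22 ≈ 0.5455
```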
Binomial Distribution
For n independent trials, each with probability p of success:
P(X = r) = ⁿCᵣ pʳ (1−p)ⁿ⁻ʳ = ⁿCᵣ pʳ qⁿ⁻ʳ
Mean = np;  Variance = npq;  SD = √(npq)
q = 1 − p. Sum of all P(X=r) = 1. Symmetric when p = q = ½.
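A short sketch (illustrative Python) that evaluates the PMF for an assumed n = 5, p = 0.3 and confirms that the probabilities sum to 1 and that the mean and variance come out to np and npq.

```python
from math import comb

def binomial_pmf(n, p, r):
    """P(X = r) = nCr · p^r · q^(n−r), with q = 1 − p."""
    q = 1 - p
    return comb(n, r) * p**r * q**(n - r)

n, p = 5, 0.3
pmf = [binomial_pmf(n, p, r) for r in range(n + 1)]

print(round(sum(pmf), 10))                              # 1.0: probabilities sum to 1
mean = sum(r * pmf[r] for r in range(n + 1))
var = sum(r * r * pmf[r] for r in range(n + 1)) - mean**2
print(round(mean, 6), round(var, 6))                    # 1.5 and 1.05, i.e. np and npq
```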
Geometric & Other Distributions
Geometric distribution: P(first success on rth trial) = q^(r-1) × p
Random variable expectation: E(X) = Σ xᵢ P(xᵢ). For functions: E(aX+b) = aE(X) + b
Variance: Var(X) = E(X²) − [E(X)]². Always ≥ 0.
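A small check of the geometric formula and of E(X) and Var(X) on a made-up three-point distribution (illustrative Python).

```python
from fractions import Fraction

# Geometric: P(first success on the r-th trial) = q^(r−1) · p
p = Fraction(1, 6)          # success = rolling a six
q = 1 - p
print(q**2 * p)             # first six on the 3rd roll: (5/6)^2 · (1/6) = 25/216

# Expectation and variance of a small discrete random variable
dist = [(0, Fraction(1, 4)), (1, Fraction(1, 2)), (2, Fraction(1, 4))]
E = sum(x * px for x, px in dist)             # E(X) = Σ x·P(x) = 1
E2 = sum(x * x * px for x, px in dist)        # E(X²) = 3/2
print(E, E2 - E**2)                           # Var(X) = E(X²) − [E(X)]² = 1/2

# Linearity: E(aX + b) = a·E(X) + b
a, b = 3, 2
print(sum((a * x + b) * px for x, px in dist) == a * E + b)   # True
```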
Formula Vault
All probability formulas — classical, conditional, Bayes', and distributions.
Classical Probability
P(A) = favourable/total
Equally likely outcomes
Complement
P(A') = 1 − P(A)
P(A) + P(A') = 1 always
Addition Rule
P(A∪B) = P(A)+P(B)−P(A∩B)
Subtract intersection to avoid double-counting
Independent Events
P(A∩B) = P(A) × P(B)
Also: P(A|B) = P(A)
Conditional Probability
P(A|B) = P(A∩B)/P(B)
Probability of A given B occurred
Multiplication Rule
P(A∩B) = P(A)·P(B|A)
Also = P(B)·P(A|B)
Total Probability
P(A) = Σ P(Bᵢ)P(A|Bᵢ)
Bᵢ exhaustive & exclusive
Bayes' Theorem
P(Bₖ|A) = P(Bₖ)P(A|Bₖ)/P(A)
P(A) from total probability
Binomial P(X=r)
ⁿCᵣ pʳ qⁿ⁻ʳ
q = 1−p; n trials; p = success prob
Binomial Mean/Var
μ = np; σ² = npq
σ (SD) = √(npq)
Worked Examples
5 problems — classical probability to Bayes' to binomial.
Easy: Two dice rolled — find P(sum = 7)
Two fair dice are rolled. Find the probability that the sum is 7.
Step 1: Total outcomes = 6 × 6 = 36
Step 2: Favourable: (1,6), (2,5), (3,4), (4,3), (5,2), (6,1) = 6 outcomes
✓ P(sum = 7) = 1/6
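Optional cross-check by brute-force enumeration of all 36 outcomes (illustrative Python):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))     # 36 equally likely ordered pairs
favourable = [w for w in outcomes if sum(w) == 7]
print(len(favourable), Fraction(len(favourable), len(outcomes)))   # 6  1/6
```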
Easy: Find P(A∪B) given P(A)=0.4, P(B)=0.3, P(A∩B)=0.1
Find P(A ∪ B) if P(A) = 0.4, P(B) = 0.3, P(A ∩ B) = 0.1.
Step 1: P(A∪B) = P(A) + P(B) − P(A∩B) = 0.4 + 0.3 − 0.1 = 0.6
✓ P(A ∪ B) = 0.6
Medium: Bag has 3 red, 4 blue balls. Two drawn without replacement. P(both red)?
A bag has 3 red and 4 blue balls. Two balls are drawn without replacement. Find P(both red).
Step 1: P(1st red) = 3/7
Step 2: P(2nd red | 1st red) = 2/6 = 1/3
Step 3: P(both red) = (3/7) × (1/3) = 3/21 = 1/7
✓ P(both red) = 1/7
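Optional cross-check by listing all ordered draws of two balls (illustrative Python):

```python
from fractions import Fraction
from itertools import permutations

balls = ["R"] * 3 + ["B"] * 4                       # 3 red, 4 blue
draws = list(permutations(range(len(balls)), 2))    # 7 × 6 = 42 ordered draws without replacement
both_red = sum(1 for i, j in draws if balls[i] == "R" and balls[j] == "R")
print(Fraction(both_red, len(draws)))               # 6/42 = 1/7
```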
EAPCET Level: Bayes' Theorem — factory output from two machines
Machine A makes 60% of output with 2% defects; Machine B makes 40% with 5% defects. A defective item is found — what's the probability it came from Machine A?
Step 1: P(A) = 0.6, P(B) = 0.4 (priors: each machine's share of output)
Step 2: P(defect|A) = 0.02, P(defect|B) = 0.05
Step 3: P(defect) = 0.6×0.02 + 0.4×0.05 = 0.012 + 0.020 = 0.032
Step 4: P(A|defect) = 0.6×0.02/0.032 = 0.012/0.032 = 3/8 = 0.375
✓ P(Machine A | defective) = 3/8 = 37.5%
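Optional cross-check of the arithmetic with exact fractions (illustrative Python):

```python
from fractions import Fraction

p_A, p_B = Fraction(6, 10), Fraction(4, 10)      # priors: share of total output
d_A, d_B = Fraction(2, 100), Fraction(5, 100)    # defect rates P(defect | machine)

p_defect = p_A * d_A + p_B * d_B                 # total probability: 4/125 = 0.032
print(p_A * d_A / p_defect)                      # 3/8
```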
Trap Question: A fair coin tossed 5 times — P(at least one head)?
Find P(at least one head) when a fair coin is tossed 5 times. ⚠️ Students compute directly instead of using complement.
Step 1: Direct approach: P(exactly 1H) + P(2H) + ... + P(5H) = 5 separate calculations. Very slow.
Step 2: Complement approach: P(at least 1H) = 1 − P(no heads) = 1 − P(all tails)
Step 3: P(all tails) = (1/2)⁵ = 1/32
Step 4: P(at least 1 head) = 1 − 1/32 = 31/32
✓ P(at least one head) = 31/32 (use complement: 1 − P(none))
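Optional cross-check: the complement value agrees with the slow direct sum (illustrative Python):

```python
from fractions import Fraction
from math import comb

n = 5
p_complement = 1 - Fraction(1, 2)**n                                 # 1 − P(all tails)
p_direct = sum(Fraction(comb(n, r), 2**n) for r in range(1, n + 1))  # P(1H) + ... + P(5H)
print(p_complement, p_direct == p_complement)                        # 31/32 True
```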
Mistake DNA
4 probability errors from EAPCET distractor analysis.
➕
P(A∪B) = P(A) + P(B) Without Subtracting P(A∩B)
The addition rule requires subtracting the intersection to avoid double-counting overlapping outcomes.
❌ Wrong
P(A∪B) = 0.4 + 0.3 = 0.7 ✗ (if P(A∩B) = 0.1, this overcounts)
✓ Correct
P(A∪B) = P(A)+P(B)−P(A∩B) = 0.4+0.3−0.1 = 0.6 ✓
Subtract the intersection
Only if A and B are mutually exclusive (P(A∩B)=0) does P(A∪B) = P(A)+P(B). Always check mutual exclusivity.
🔄
Confusing Independent Events with Mutually Exclusive Events
Mutually exclusive: P(A∩B) = 0 (can't both happen). Independent: P(A∩B) = P(A)×P(B) (occurrence of one doesn't affect the other).
❌ Wrong
'A and B are mutually exclusive → they are independent' ✗ (nearly the opposite!)
✓ Correct
Mutually exclusive: A∩B = ∅ ✓
Independent: P(A∩B) = P(A)P(B) ✓
If P(A), P(B) > 0, these conditions can't both hold
If A and B are mutually exclusive with P(A)>0 and P(B)>0, then P(A∩B)=0 ≠ P(A)P(B) → they are NOT independent.
🎲
Not Using the Complement Method for 'At Least One'
'At least one' problems solved directly require many cases. Using complement (1 − P(none)) is always faster.
❌ Wrong
P(at least 1 six in 3 rolls):
P(1 six) + P(2 sixes) + P(3 sixes) = 3 separate calculations ✗
✓ Correct
1 − P(no sixes)
= 1 − (5/6)³
= 1 − 125/216 = 91/216 ✓
One step, no cases
Whenever a question has 'at least one', 'at least once', or 'more than zero': use complement P = 1 − P(none/zero).
📊
Binomial Distribution: Wrong q Value
q must be 1 − p. If p = 0.3, then q = 0.7. Students sometimes use q = p or forget it entirely.
❌ Wrong
P(X=2) with p=0.3, n=5:
⁵C₂ (0.3)² (0.3)³ ✗
(used p instead of q)
✓ Correct
q = 1 − 0.3 = 0.7 ✓
P(X=2) = ⁵C₂(0.3)²(0.7)³
= 10×0.09×0.343
= 0.3087 ✓
In binomial P(X=r) = ⁿCᵣ pʳ qⁿ⁻ʳ: p = probability of success, q = 1−p = probability of failure. The exponents sum to n: r + (n−r) = n. ✓
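A quick numerical illustration of how large the error is (illustrative Python, using the same n = 5, p = 0.3 example):

```python
from math import comb

n, p, r = 5, 0.3, 2
q = 1 - p                                   # q must be 1 − p, never p itself

trap = comb(n, r) * p**r * p**(n - r)       # the trap: p reused for the failure term
correct = comb(n, r) * p**r * q**(n - r)
print(round(trap, 4), round(correct, 4))    # 0.0243 vs 0.3087
```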
Chapter Intelligence
Probability builds on P&C and leads to Statistics — master the fundamentals first.
EAPCET Weightage (2019–2024)
- Conditional probability: ~7
- Total probability theorem: ~3
High-Yield PYQ Patterns
- P(A∪B) using addition rule
- Conditional P(A|B) calculation
- Bayes' theorem 2-machine problem
- Binomial mean and variance
- P(at least one) using complement
- Independent event identification
- Random variable expectation E(X)
Exam Strategy
- 'At least one' → always use complement: P = 1 − P(none). This is faster for every such problem.
- Bayes' theorem: set up a table with prior probabilities P(Bᵢ) and likelihoods P(A|Bᵢ). Compute the joint P(Bᵢ ∩ A) = P(Bᵢ)×P(A|Bᵢ), then divide by their total.
- Binomial questions: identify n (trials), p (success probability per trial), and required r (number of successes). Apply P(X=r) = ⁿCᵣ pʳ qⁿ⁻ʳ directly.
- Independent vs mutually exclusive: if A and B are non-empty and mutually exclusive, they cannot be independent (knowing one gives info about the other).