- High Bias → model too simple
- High Variance → model too complex

Now let’s directly connect this to:
- Underfitting
- Overfitting
🔴 1️⃣ High Bias = Underfitting
Let’s use your example:
Model says:
“Every house costs £200,000.”
It ignores:
- Size
- Location
- Property type
- Market trends
It doesn’t even try to learn patterns.
What happens?
Big London house (£800k) → predicts £200k ❌
Small Coventry flat (£120k) → predicts £200k ❌
Medium house (£350k) → predicts £200k ❌
It is wrong everywhere.
But notice something important:
👉 It is wrong in a stable and consistent way.
It does not change behavior.
It is just too simple.
That is Underfitting.
Underfitting means:
Model fails to capture the real pattern in training data.
Even on training data, it performs badly.
Training error = high
Test error = high
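This constant-price model can be sketched in a few lines. The house prices below are made-up illustrative numbers, not real data:

```python
# A minimal sketch of underfitting: the constant model "every house costs
# £200,000", scored on hypothetical (invented) train and test prices.

def mae(predictions, targets):
    """Mean absolute error in pounds."""
    return sum(abs(p - t) for p, t in zip(predictions, targets)) / len(targets)

train_prices = [800_000, 120_000, 350_000, 420_000]  # training houses
test_prices = [300_000, 650_000, 180_000]            # unseen houses

def constant_model(houses):
    return [200_000 for _ in houses]  # ignores size, location, everything

train_error = mae(constant_model(train_prices), train_prices)
test_error = mae(constant_model(test_prices), test_prices)

print(f"train MAE: £{train_error:,.0f}")  # £262,500 — high
print(f"test  MAE: £{test_error:,.0f}")   # £190,000 — also high
```

Notice the signature of high bias: the errors are large on both sets, and roughly similar — the model is wrong in the same stable way everywhere.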
Why is this High Bias?
Because:
The model assumes something too simple:
“Price is constant.”
That assumption is wrong.
So bias = strong wrong assumption.
Lazy Student Analogy
Teacher asks:
- 5 + 3 = ?
- 10 + 4 = ?
- 7 × 6 = ?
Student answers:
“42” every time.
He didn’t even try to understand. (Even when “42” happens to be right, as for 7 × 6, it’s luck, not learning.)
That’s underfitting.
🔵 2️⃣ High Variance = Overfitting
Now the second case.
Model memorizes every training house exactly.
If training data says:
House A → £350,000
House B → £420,000
Model stores exact mapping.
Training performance:
Perfect.
Now new house appears:
Slightly different.
Model panics.
Prediction is very wrong.
Why?
Because it didn’t learn the pattern.
It memorized the noise.
That is Overfitting.
Overfitting means:
Model performs extremely well on training data,
but poorly on new, unseen data.
Training error = very low
Test error = high
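The memorizing model can be sketched as a literal lookup table. The sizes and prices are invented, and the zero fallback is just a stand-in for “the model has no idea”:

```python
# Sketch of overfitting by memorization: the "model" is a lookup table of
# exact training pairs (size in sq ft -> price). All numbers are invented.

train_data = {950: 280_000, 1200: 350_000, 1800: 420_000}

def memorizing_model(size):
    # Exact recall for training inputs; a meaningless fallback for anything else.
    return train_data.get(size, 0)

# Training error: zero — every training house is recalled perfectly.
train_error = sum(abs(memorizing_model(s) - p) for s, p in train_data.items())
print(train_error)  # 0

# A new house, only slightly different from a training one (1210 vs 1200 sq ft):
print(memorizing_model(1210))  # 0 — the lookup fails, the prediction is useless
```

Training error is exactly zero, yet the model is hopeless one step outside its training set — the classic overfitting pattern.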
Why is this High Variance?
Because:
If you slightly change training data,
model changes a lot.
It is unstable.
Small data change → big prediction change.
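This instability can be sketched with a 1-nearest-neighbour model — a classic high-variance learner — retrained on two training sets that differ in a single house. All numbers are invented:

```python
# High-variance sketch: 1-nearest-neighbour predictions swing wildly when
# one training example changes slightly. All numbers are invented.

def nn_predict(train, size):
    """Predict the price of the closest training house (pure memorization)."""
    nearest_size, nearest_price = min(train, key=lambda pair: abs(pair[0] - size))
    return nearest_price

train_a = [(950, 280_000), (1200, 350_000), (1800, 420_000)]
# The same data, except one house is slightly resized and repriced:
train_b = [(950, 280_000), (1195, 500_000), (1800, 420_000)]

query = 1190  # a new house to price
print(nn_predict(train_a, query))  # 350000
print(nn_predict(train_b, query))  # 500000 — small data change, big swing
```

One nudged training point changed the prediction by £150,000 — exactly the "small data change → big prediction change" behaviour that defines high variance.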
Over-Studying Student Analogy
Student memorizes last year’s exam paper exactly.
New exam is slightly different.
Student fails badly.
That is overfitting.
🔥 Now Compare Both Clearly
| Situation | Bias | Variance | Fit Type |
|---|---|---|---|
| Too simple model | High | Low | Underfitting |
| Too complex model | Low | High | Overfitting |
🧠 What Is Really Happening?
Underfitting (High Bias)
Model capacity is too low.
It cannot represent the real function.
Like drawing a straight line through curved data.
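That straight-line-on-a-curve picture can be sketched with an ordinary least-squares line fitted to quadratic data (illustrative numbers):

```python
# Capacity sketch: the best possible straight line still misses curved data,
# because a line simply cannot bend. Data here is the quadratic y = x^2.

xs = [0, 1, 2, 3, 4, 5]
ys = [x * x for x in xs]  # the true pattern is curved

# Closed-form ordinary least squares for a line y = slope*x + intercept:
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x

residuals = [abs((slope * x + intercept) - y) for x, y in zip(xs, ys)]
print(max(residuals))  # about 3.3 — large, no matter how the line is placed
```

Even the optimal line in this family leaves big residuals at the ends of the range: the error comes from the model family’s assumption (linearity), which is exactly what bias is.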
Overfitting (High Variance)
Model capacity is too high.
It fits the noise, not the pattern.
Like drawing an extremely wiggly curve that touches every point.
🎯 Final Simple Memory Rule
Underfitting:
Model is blind.
Overfitting:
Model is paranoid.
Balanced model:
Model is intelligent.
👉 Real ML Perspective (Important)
When training a model, you check:

If:
Train error high
Test error high
👉 Underfitting

If:
Train error low
Test error high
👉 Overfitting

If:
Train error low
Test error low
👉 Good model
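These three checks can be written as one tiny helper. The single "high/low" threshold here is an illustrative assumption — real cutoffs depend on the problem and the error metric:

```python
# A toy diagnosis rule for the three train/test error patterns above.
# The 0.1 threshold is an arbitrary illustrative choice.

def diagnose(train_error, test_error, threshold=0.1):
    """Classify a model's fit from its train and test errors."""
    if train_error > threshold:   # can't even fit the training data
        return "underfitting"
    if test_error > threshold:    # fits train, fails on unseen data
        return "overfitting"
    return "good model"

print(diagnose(0.40, 0.42))  # underfitting
print(diagnose(0.02, 0.35))  # overfitting
print(diagnose(0.03, 0.05))  # good model
```

In practice you would compare the two errors against each other (the size of the train–test gap) rather than against a fixed constant, but the decision logic has this shape.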
👉 Final Clear Definitions
Underfitting:
Model too simple to learn the real pattern.
Overfitting:
Model too complex; it learns noise instead of the pattern.