# Hyperparameter Tuning with GridSearchCV

## What hyperparameters are
Hyperparameters are settings you choose before training.
Examples:

- `max_depth` in a decision tree
- `C` in an SVM
- the number of neighbors (`n_neighbors`) in KNN
They strongly affect the bias/variance trade-off of the model.
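As a quick illustration of that trade-off (the dataset and model here are illustrative, not from the original text), a decision tree's `max_depth` moves it from underfitting to overfitting:

```python
# Illustrative sketch: how max_depth shifts a tree between
# underfitting (high bias) and overfitting (high variance).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for depth in (1, 5, None):  # shallow = high bias; unlimited = high variance
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    print(depth, tree.score(X_train, y_train), tree.score(X_test, y_test))
```

With `max_depth=None` the tree typically reaches perfect training accuracy while its test accuracy lags, which is the variance side of the trade-off.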
## Grid search idea

Grid search exhaustively tries every combination of values in a parameter grid.
```mermaid
flowchart LR
    G[Parameter grid] --> CV[Cross-validation]
    CV --> S[Score combinations]
    S --> B[Best hyperparameters]
```
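To make "all combinations" concrete, scikit-learn's `ParameterGrid` enumerates the Cartesian product of a grid (the parameter values below are illustrative):

```python
# Sketch of the "try all combinations" idea using ParameterGrid.
from sklearn.model_selection import ParameterGrid

params = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1]}
combos = list(ParameterGrid(params))
print(len(combos))  # 3 values of C x 3 values of gamma = 9 combinations
print(combos[0])
```

GridSearchCV fits and cross-validates the model once per combination, so the cost grows multiplicatively with each added parameter.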
## Scikit-learn example

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Load a dataset so the example runs end to end
# (the original snippet left X and y undefined)
X, y = load_iris(return_X_y=True)

params = {
    "C": [0.1, 1, 10],
    "gamma": ["scale", 0.01, 0.1],
    "kernel": ["rbf"],
}

grid = GridSearchCV(
    estimator=SVC(),
    param_grid=params,
    cv=5,
    scoring="accuracy",
    n_jobs=-1,
)
grid.fit(X, y)

print("best params:", grid.best_params_)
print("best score:", grid.best_score_)
```
## Limitations
Grid search becomes expensive when:

- you have many parameters
- each parameter has many values

That's where RandomizedSearchCV helps: it samples a fixed number of combinations instead of trying them all.
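A minimal sketch of RandomizedSearchCV on the same SVC setup (the distributions and `n_iter` value are illustrative choices, not from the original text):

```python
# RandomizedSearchCV samples n_iter candidates from distributions,
# so the cost is fixed regardless of how large the search space is.
from scipy.stats import loguniform
from sklearn.datasets import load_iris
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

param_distributions = {
    "C": loguniform(1e-2, 1e2),      # continuous range instead of a fixed list
    "gamma": loguniform(1e-4, 1e0),
}

search = RandomizedSearchCV(
    SVC(kernel="rbf"),
    param_distributions=param_distributions,
    n_iter=20,   # only 20 sampled combinations, however large the space
    cv=5,
    scoring="accuracy",
    random_state=0,
)
search.fit(X, y)
print("best params:", search.best_params_)
```

Because candidates are sampled, you can also search continuous ranges (like `loguniform` above) that a grid cannot enumerate.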
## Mini-checkpoint

If you have 5 parameters with 10 options each:

- grid size = 10^5 = 100,000 combinations; with 5-fold CV that is 500,000 model fits (far too many)
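The arithmetic above can be checked directly:

```python
# Quick check of the grid-size estimate: options ** parameters,
# then multiplied by the number of CV folds.
n_params, n_options, cv_folds = 5, 10, 5
combinations = n_options ** n_params
print(combinations)             # combinations to evaluate
print(combinations * cv_folds)  # total model fits with 5-fold CV
```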
## 🧪 Try It Yourself

- Exercise 1: Train-Test Split
- Exercise 2: Fit a Linear Model
- Exercise 3: Evaluate with MSE
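If you want a starting point, here is one possible sketch covering all three exercises; the diabetes dataset is an assumption, since the page does not specify which data to use:

```python
# Hedged sketch for the three exercises, assuming the diabetes dataset.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Exercise 1: train-test split
X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Exercise 2: fit a linear model
model = LinearRegression().fit(X_train, y_train)

# Exercise 3: evaluate with MSE on the held-out test set
mse = mean_squared_error(y_test, model.predict(X_test))
print("test MSE:", mse)
```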