This guide will help you implement advanced hyperparameter optimization with Optuna by using early stopping, multi-objective search, and deep visual analysis.

Tech · By Gavin Wallace · 18/11/2025 · 5 Mins Read

This tutorial demonstrates an advanced Optuna workflow that combines pruning, multi-objective optimization, callbacks, and rich visualization. Optuna lets us shape better search spaces, streamline experimentation, and gain insight into model improvements. Working with real datasets and efficient search strategies, we analyze trial behaviour in an intuitive, interactive, and fast way. See the FULL CODES here.

import optuna
from optuna.pruners import MedianPruner
from optuna.samplers import TPESampler
import numpy as np
from sklearn.datasets import load_breast_cancer, load_diabetes
from sklearn.model_selection import cross_val_score, KFold
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
import matplotlib.pyplot as plt


def objective_with_pruning(trial):
   X, y = load_breast_cancer(return_X_y=True)
    params = {
        'n_estimators': trial.suggest_int('n_estimators', 50, 200),
        'max_depth': trial.suggest_int('max_depth', 2, 10),
        'min_samples_split': trial.suggest_int('min_samples_split', 2, 20),
        'min_samples_leaf': trial.suggest_int('min_samples_leaf', 1, 10),
        'subsample': trial.suggest_float('subsample', 0.6, 1.0),
        'max_features': trial.suggest_categorical('max_features', ['sqrt', 'log2', None]),
    }
   model = GradientBoostingClassifier(**params, random_state=42)
   kf = KFold(n_splits=3, shuffle=True, random_state=42)
   scores = []
   for fold, (train_idx, val_idx) in enumerate(kf.split(X)):
        X_train, X_val = X[train_idx], X[val_idx]
        y_train, y_val = y[train_idx], y[val_idx]
        model.fit(X_train, y_train)
        score = model.score(X_val, y_val)
        scores.append(score)
        trial.report(np.mean(scores), fold)
        if trial.should_prune():
            raise optuna.TrialPruned()
   return np.mean(scores)


study1 = optuna.create_study(
   direction='maximize',
   sampler=TPESampler(seed=42),
   pruner=MedianPruner(n_startup_trials=5, n_warmup_steps=1)
)
study1.optimize(objective_with_pruning, n_trials=30, show_progress_bar=True)


print(study1.best_value, study1.best_params)

We define the first objective function with pruning enabled. As we run Gradient Boosting, Optuna actively prunes weaker trials and steers the search toward stronger hyperparameter regions, so optimization gets faster as it progresses. See the FULL CODES here.

def multi_objective(trial):
   X, y = load_breast_cancer(return_X_y=True)
   n_estimators = trial.suggest_int('n_estimators', 10, 200)
   max_depth = trial.suggest_int('max_depth', 2, 20)
   min_samples_split = trial.suggest_int('min_samples_split', 2, 20)
   model = RandomForestClassifier(
       n_estimators=n_estimators,
       max_depth=max_depth,
       min_samples_split=min_samples_split,
       random_state=42,
       n_jobs=-1
   )
   accuracy = cross_val_score(model, X, y, cv=3, scoring='accuracy', n_jobs=-1).mean()
   complexity = n_estimators * max_depth
    return accuracy, complexity


study2 = optuna.create_study(
   directions=['maximize', 'minimize'],
   sampler=TPESampler(seed=42)
)
study2.optimize(multi_objective, n_trials=50, show_progress_bar=True)


for t in study2.best_trials[:3]:
   print(t.number, t.values)

In a multi-objective setting, we optimize accuracy and model complexity at the same time. Optuna builds a Pareto front automatically as we experiment with different configurations, which lets us weigh trade-offs instead of chasing a single score and helps us understand how the metrics interact. See the FULL CODES here.

class EarlyStoppingCallback:
   def __init__(self, early_stopping_rounds=10, direction='maximize'):
       self.early_stopping_rounds = early_stopping_rounds
        self.direction = direction
       self.best_value = float('-inf') if direction == 'maximize' else float('inf')
       self.counter = 0
   def __call__(self, study, trial):
        if trial.state != optuna.trial.TrialState.COMPLETE:
            return
       v = trial.value
        if self.direction == 'maximize':
           if v > self.best_value:
               self.best_value, self.counter = v, 0
           else:
               self.counter += 1
        else:
            if v < self.best_value:
                self.best_value, self.counter = v, 0
            else:
                self.counter += 1
        if self.counter >= self.early_stopping_rounds:
            study.stop()


def objective_regression(trial):
   X, y = load_diabetes(return_X_y=True)
   alpha = trial.suggest_float('alpha', 1e-3, 10.0, log=True)
   max_iter = trial.suggest_int('max_iter', 100, 2000)
    from sklearn.linear_model import Ridge
   model = Ridge(alpha=alpha, max_iter=max_iter, random_state=42)
   score = cross_val_score(model, X, y, cv=5, scoring='neg_mean_squared_error', n_jobs=-1).mean()
    return -score


early_stopping = EarlyStoppingCallback(early_stopping_rounds=15, direction='minimize')
study3 = optuna.create_study(direction='minimize', sampler=TPESampler(seed=42))
study3.optimize(objective_regression, n_trials=100, callbacks=[early_stopping], show_progress_bar=True)


print(study3.best_value, study3.best_params)

We introduce our own early-stopping callback and attach it to a regression objective. The study stops automatically once progress stalls, saving both time and computation, and shows how far Optuna can be customized to fit real-world training behavior. See the FULL CODES here.

fig, axes = plt.subplots(2, 2, figsize=(14, 10))


ax = axes[0, 0]
values = [t.value for t in study1.trials if t.value is not None]
ax.plot(values, marker="o", markersize=3)
ax.axhline(y=study1.best_value, color="r", linestyle="--")
ax.set_title('Study 1 History')


ax = axes[0, 1]
importance = optuna.importance.get_param_importances(study1)
params = list(importance.keys())[:5]
vals = [importance[p] for p in params]
ax.barh(params, vals)
ax.set_title('Param Importance')


ax = axes[1, 0]
for t in study2.trials:
    if t.values:
        ax.scatter(t.values[0], t.values[1], alpha=0.3)
for t in study2.best_trials:
    ax.scatter(t.values[0], t.values[1], c="red", s=90)
ax.set_title('Pareto Front')


ax = axes[1, 1]
pairs = [(t.params.get('max_depth', 0), t.value) for t in study1.trials if t.value]
xv, yv = zip(*pairs) if pairs else ([], [])
ax.scatter(xv, yv, alpha=0.6)
ax.set_title('max_depth vs Accuracy')


plt.tight_layout()
plt.savefig('optuna_analysis.png', dpi=150)
plt.show()

Finally, we visualize everything we have done so far. The optimization curves, parameter importances, Pareto front, and parameter-metric relationships let us interpret the entire experiment at a glance and see where, and why, the model performs best. See the FULL CODES here.

p1 = len([t for t in study1.trials if t.state == optuna.trial.TrialState.PRUNED])
print("Study 1 Best Accuracy:", study1.best_value)
print("Study 1 Pruned %:", p1 / len(study1.trials) * 100)


print("Study 2 Pareto Solutions:", len(study2.best_trials))


print("Study 3 Best MSE:", study3.best_value)
print("Study 3 Trials:", len(study3.trials))

We summarize the key findings from the three studies: best accuracy, pruning rate, Pareto solutions, and regression MSE. Condensing everything into a few numbers shows exactly where we stand in the optimization journey, and the setup is now ready to be extended and adapted for further experiments.

We have learned how to create powerful hyperparameter pipelines that go beyond tuning a single metric. The workflow combines pruning, Pareto optimization, early stopping, and analysis tools into a flexible, complete template we can adapt to any new project, whether a classical ML model or a deep learning one, giving us a blueprint for high-quality Optuna experiments.


