
The GluonTS Multi-Model Workflow Guide: Synthetic Data, Advanced Visualizations, and Evaluation.

Tech · By Gavin Wallace · 24/08/2025 · 6 Mins Read

In this tutorial, we take a practical, code-first look at GluonTS: we generate complex synthetic data, prepare it, and train multiple models concurrently. We focus on working with several estimators at once, handling missing dependencies, and producing usable results. Evaluation and visualization are built into the workflow, so training, comparing, and interpreting models runs as a single, seamless process. Visit the FULL CODES here. Please feel free to browse our GitHub Page for Tutorials, Codes and Notebooks.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from datetime import datetime, timedelta
import warnings
warnings.filterwarnings('ignore')


from gluonts.dataset.pandas import PandasDataset
from gluonts.dataset.split import split
from gluonts.evaluation import make_evaluation_predictions, Evaluator
from gluonts.dataset.artificial import ComplexSeasonalTimeSeries


try:
    from gluonts.torch import DeepAREstimator
    TORCH_AVAILABLE = True
except ImportError:
    TORCH_AVAILABLE = False


try:
    from gluonts.mx import DeepAREstimator as MXDeepAREstimator
    from gluonts.mx import SimpleFeedForwardEstimator
    MX_AVAILABLE = True
except ImportError:
    MX_AVAILABLE = False

Our first step is to import the GluonTS tools and supporting libraries. The conditional imports of the PyTorch and MXNet estimators let us use whichever backend is available in our environment.
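The guard pattern above generalizes to any optional dependency. Here is a minimal, self-contained sketch using a deliberately nonexistent module name (`_no_such_backend_` is our placeholder, not a real package):

```python
# Optional-dependency guard: try the import, record whether it worked,
# and let the rest of the script branch on the flag instead of crashing.
try:
    import _no_such_backend_  # hypothetical module; not expected to exist
    BACKEND_AVAILABLE = True
except ImportError:
    BACKEND_AVAILABLE = False

print(f"Backend available: {BACKEND_AVAILABLE}")
```

Because the failure is captured as a boolean rather than an exception, downstream code can degrade gracefully, exactly as the tutorial does when neither PyTorch nor MXNet is installed.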

def create_synthetic_dataset(num_series=50, length=365, prediction_length=30):
   """Generate synthetic multi-variate time series with trends, seasonality, and noise"""
   np.random.seed(42)
   series_list = []
  
   for i in range(num_series):
       trend = np.cumsum(np.random.normal(0.1 + i*0.01, 0.1, length))
      
       daily_season = 10 * np.sin(2 * np.pi * np.arange(length) / 7) 
       yearly_season = 20 * np.sin(2 * np.pi * np.arange(length) / 365.25) 
      
       noise = np.random.normal(0, 5, length)
        values = np.maximum(trend + daily_season + yearly_season + noise + 100, 0)

        dates = pd.date_range(start="2020-01-01", periods=length, freq='D')
      
       series_list.append(pd.Series(values, index=dates, name=f'series_{i}'))
  
   return pd.concat(series_list, axis=1)

We create a dataset that combines trend, seasonality, and noise. The fixed random seed makes every run reproducible. The function returns a multi-series DataFrame that is ready for experiments.
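To make the reproducibility claim concrete, here is a small, hypothetical single-series variant of the generator (the name `make_series` is ours, not from the tutorial):

```python
import numpy as np
import pandas as pd

def make_series(length=200, seed=42):
    # Same ingredients as the tutorial's generator: trend + weekly and
    # yearly seasonality + noise, floored so values stay non-negative.
    rng = np.random.default_rng(seed)
    trend = np.cumsum(rng.normal(0.1, 0.1, length))
    weekly = 10 * np.sin(2 * np.pi * np.arange(length) / 7)
    yearly = 20 * np.sin(2 * np.pi * np.arange(length) / 365.25)
    noise = rng.normal(0, 5, length)
    values = np.maximum(trend + weekly + yearly + noise + 100, 0)
    dates = pd.date_range("2020-01-01", periods=length, freq="D")
    return pd.Series(values, index=dates)

s1, s2 = make_series(), make_series()  # identical because the seed is fixed
```

Because the seed is fixed, repeated calls produce identical series, which is what makes model comparisons across runs meaningful.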

print("🚀 Creating synthetic multi-series dataset...")
df = create_synthetic_dataset(num_series=10, length=200, prediction_length=30)


dataset = PandasDataset(df, target=df.columns.tolist())


training_data, test_gen = split(dataset, offset=-60)
test_data = test_gen.generate_instances(prediction_length=30, windows=2)


print("🔧 Initializing forecasting models...")


models = {}


if TORCH_AVAILABLE:
   try:
       models['DeepAR_Torch'] = DeepAREstimator(
           freq='D',
           prediction_length=30
       )
       print("✅ PyTorch DeepAR loaded")
    except Exception as e:
       print(f"❌ PyTorch DeepAR failed to load: {e}")


if MX_AVAILABLE:
   try:
       models['DeepAR_MX'] = MXDeepAREstimator(
           freq='D',
           prediction_length=30,
           trainer=dict(epochs=5)
       )
       print("✅ MXNet DeepAR loaded")
    except Exception as e:
       print(f"❌ MXNet DeepAR failed to load: {e}")
  
   try:
       models['FeedForward'] = SimpleFeedForwardEstimator(
           freq='D',
           prediction_length=30,
           trainer=dict(epochs=5)
       )
       print("✅ FeedForward model loaded")
    except Exception as e:
       print(f"❌ FeedForward failed to load: {e}")


if not models:
   print("🔄 Using artificial dataset with built-in models...")
   artificial_ds = ComplexSeasonalTimeSeries(
       num_series=10,
       prediction_length=30,
       freq='D',
       length_low=150,
       length_high=200
   ).generate()
  
   training_data, test_gen = split(artificial_ds, offset=-60)
   test_data = test_gen.generate_instances(prediction_length=30, windows=2)

We create a 10-series dataset, wrap it in a GluonTS PandasDataset, and split it into training and testing windows. When the backends are available, we initialize several estimators, including PyTorch DeepAR and MXNet DeepAR. If no backend loads, we fall back on a pre-built artificial dataset.
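The `offset=-60` split can be understood with plain slicing. Below is a hypothetical stand-in, assuming the same semantics as `gluonts.dataset.split.split` (a negative offset counts back from the end of each series):

```python
import numpy as np
import pandas as pd

# 200 daily observations as a stand-in for one of the tutorial's series.
series = pd.Series(np.arange(200.0),
                   index=pd.date_range("2020-01-01", periods=200, freq="D"))

# offset=-60 reserves the final 60 points for testing.
train = series.iloc[:-60]
held_out = series.iloc[-60:]

# windows=2 with prediction_length=30 covers the held-out region
# with two consecutive 30-step evaluation windows.
windows = [held_out.iloc[i * 30:(i + 1) * 30] for i in range(2)]
```

The two 30-step windows tile the 60-point held-out region exactly, which is why the tutorial pairs `offset=-60` with `prediction_length=30, windows=2`.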

trained_models = {}
all_forecasts = {}


if models:
    for name, estimator in models.items():
       print(f"🎯 Training {name} model...")
       try:
           predictor = estimator.train(training_data)
           trained_models[name] = predictor
          
           forecasts = list(predictor.predict(test_data.input))
            all_forecasts[name] = forecasts
           print(f"✅ {name} training completed!")
          
        except Exception as e:
            print(f"❌ {name} training failed: {e}")
            continue


print("📊 Evaluating model performance...")
evaluator = Evaluator(quantiles=[0.1, 0.5, 0.9])
evaluation_results = {}


for name, forecasts in all_forecasts.items():
    if forecasts:
        try:
           agg_metrics, item_metrics = evaluator(test_data.label, forecasts)
           evaluation_results[name] = agg_metrics
            print(f"\n{name} Performance:")
           print(f"  MASE: {agg_metrics['MASE']:.4f}")
           print(f"  sMAPE: {agg_metrics['sMAPE']:.4f}")
           print(f"  Mean wQuantileLoss: {agg_metrics['mean_wQuantileLoss']:.4f}")
        except Exception as e:
           print(f"❌ Evaluation failed for {name}: {e}")

We train each estimator, collect its forecasts, and store them for reuse. We then evaluate the results using MASE, sMAPE, and weighted quantile loss, which gives us a comparable view of model performance.
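For intuition about what the Evaluator reports, here are minimal NumPy sketches of sMAPE and the pinball (quantile) loss. The exact gluonts normalizations may differ slightly, so treat these as illustrative:

```python
import numpy as np

def smape(y, yhat):
    # Symmetric MAPE: 0 for a perfect forecast, bounded above by 2.
    return np.mean(2 * np.abs(yhat - y) / (np.abs(y) + np.abs(yhat)))

def pinball_loss(y, q_pred, q):
    # Quantile (pinball) loss: penalizes under-prediction by q and
    # over-prediction by (1 - q); at q=0.5 it is half the mean absolute error.
    diff = y - q_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

y = np.array([100.0, 110.0, 120.0])
print(smape(y, y))                    # perfect forecast
print(pinball_loss(y, y - 2.0, 0.5))  # constant under-prediction by 2
```

The weighted quantile loss the Evaluator prints (`mean_wQuantileLoss`) averages the pinball loss over the requested quantiles (0.1, 0.5, 0.9 here), normalized by the magnitude of the targets.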

def plot_advanced_forecasts(test_data, forecasts_dict, series_idx=0):
    """Advanced plotting with multiple models and uncertainty bands"""
    fig, axes = plt.subplots(2, 2, figsize=(15, 10))
    fig.suptitle('Advanced GluonTS Forecasting Results', fontsize=16, fontweight="bold")
    ax1, ax2, ax3, ax4 = axes.flatten()

    if not forecasts_dict:
        ax1.text(0.5, 0.5, 'No successful\nforecasts to display',
                 ha="center", va="center", fontsize=20)
        return fig

    # Plot each model's median forecast with a 10%-90% uncertainty band
    for (name, forecasts), ax in zip(forecasts_dict.items(), (ax1, ax2, ax3)):
        forecast = forecasts[series_idx]
        median = forecast.quantile(0.5)
        lo, hi = forecast.quantile(0.1), forecast.quantile(0.9)
        steps = np.arange(len(median))
        ax.plot(steps, median, 'b--', label='Median forecast', linewidth=2)
        ax.fill_between(steps, lo, hi, alpha=0.3, label='10%-90% band')
        ax.set_title(f'{name} Forecast')
        ax.legend()
        ax.grid(True, alpha=0.3)

    # Compare aggregate metrics across models in the last panel
    if evaluation_results:
        metrics = ['MASE', 'sMAPE']
        x = np.arange(len(metrics))
        width = 0.8 / len(evaluation_results)
        for i, (name, res) in enumerate(evaluation_results.items()):
            ax4.bar(x + i * width, [res[m] for m in metrics], width, label=name)
        ax4.set_xticks(x + width * (len(evaluation_results) - 1) / 2)
        ax4.set_xticklabels(metrics)
        ax4.legend()
        ax4.grid(True, alpha=0.3)
    else:
        ax4.text(0.5, 0.5, 'No evaluation\nresults available',
                 ha="center", va="center", transform=ax4.transAxes, fontsize=14)

    plt.tight_layout()
    return fig


if all_forecasts and test_data:
   print("📈 Creating advanced visualizations...")
   fig = plot_advanced_forecasts(test_data, all_forecasts, series_idx=0)
   plt.show()
  
    print(f"\n🎉 Tutorial completed successfully!")
   print(f"📊 Trained {len(trained_models)} model(s) on {len(df.columns) if 'df' in locals() else 10} time series")
   print(f"🎯 Prediction length: 30 days")
  
    if evaluation_results:
        best_model = min(evaluation_results.items(), key=lambda x: x[1]['MASE'])
       print(f"🏆 Best performing model: {best_model[0]} (MASE: {best_model[1]['MASE']:.4f})")
  
    print(f"\n🔧 Environment Status:")
   print(f"  PyTorch Support: {'✅' if TORCH_AVAILABLE else '❌'}")
   print(f"  MXNet Support: {'✅' if MX_AVAILABLE else '❌'}")
  
else:
   print("⚠️  Creating demonstration plot with synthetic data...")
  
    fig, ax = plt.subplots(1, 1, figsize=(12, 6))
  
   dates = pd.date_range('2020-01-01', periods=100, freq='D')
   ts = 100 + np.cumsum(np.random.normal(0, 2, 100)) + 20 * np.sin(np.arange(100) * 2 * np.pi / 30)
  
    ax.plot(dates[:70], ts[:70], 'b-', label="Historical Data", linewidth=2)
    ax.plot(dates[70:], ts[70:], 'r--', label="Future (Example)", linewidth=2)
    ax.fill_between(dates[70:], ts[70:] - 5, ts[70:] + 5, alpha=0.3, color="red")
  
   ax.set_title('GluonTS Probabilistic Forecasting Example', fontsize=14, fontweight="bold")
   ax.set_xlabel('Date')
   ax.set_ylabel('Value')
   ax.legend()
   ax.grid(True, alpha=0.3)
  
   plt.tight_layout()
   plt.show()
  
    print("\n📚 Tutorial demonstrates advanced GluonTS concepts:")
   print("  • Multi-series dataset generation")
   print("  • Probabilistic forecasting")
   print("  • Model evaluation and comparison")
   print("  • Advanced visualization techniques")
   print("  • Robust error handling")

We then train each available model, generate forecasts, and evaluate them with consistent metrics before visualizing the comparisons and residuals. Even when no backend is available, the workflow is still demonstrated with a synthetic example, so we can inspect all the plots and key concepts.
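The final model-selection step boils down to a `min` over the metrics dictionary. A small sketch with made-up numbers (the model names and metric values here are hypothetical):

```python
# Hypothetical evaluation results keyed by model name.
results = {
    "DeepAR_Torch": {"MASE": 0.92, "sMAPE": 0.08},
    "DeepAR_MX":    {"MASE": 1.05, "sMAPE": 0.09},
    "FeedForward":  {"MASE": 1.31, "sMAPE": 0.12},
}

# Lower MASE is better, so min() over the items picks the winner.
best_name, best_metrics = min(results.items(), key=lambda kv: kv[1]["MASE"])
print(f"Best model: {best_name} (MASE: {best_metrics['MASE']:.4f})")
```

Ranking by a scale-free metric like MASE lets models trained on different backends be compared on equal footing.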

We have put together an effective setup that balances data creation, model experimentation, and performance analysis. The workflow adapts to multiple configurations, lets us experiment with different options, and presents results visually so they are easy to compare. From here, we can keep experimenting with GluonTS and apply the same principles to real datasets.


Check out the FULL CODES here, and feel free to browse our GitHub Page for tutorials, codes, and notebooks. Also, follow us on Twitter, join our 100k+ ML SubReddit, and subscribe to our Newsletter.


Asif Razzaq is the CEO of Marktechpost Media Inc. As an entrepreneur, Asif is passionate about harnessing Artificial Intelligence to benefit society. His most recent venture is Marktechpost, an Artificial Intelligence media platform known for in-depth coverage of machine learning and deep learning news that is both technically sound and understandable to a broad audience. The platform draws over 2,000,000 monthly views, a measure of its popularity.
