This article outlines the various options available when configuring Predictions.
AutoML
AutoML is a feature designed to help you find the most accurate forecast for your data. With just one click, AutoML tests multiple forecasting models on your time series and automatically selects the one that performs best.
For each time series, AutoML generates predictions using several models, including:
AutoETS
Chronos-Bolt
Linear Regression
Moving Average
Prophet
Seasonal Differencing
It then evaluates the accuracy of each model using MASE (Mean Absolute Scaled Error; for more information, see its Wikipedia page), a robust metric that compares a model's prediction errors to those of a simple baseline (a Seasonal Naïve forecast). Accuracy is measured through backtesting on the last 20% of your historical data, ensuring that the selected model is validated on recent trends.
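To make the metric concrete, here is a minimal sketch of how MASE can be computed on a 20% holdout. The data and helper names are illustrative and are not Pigment's internal implementation:

```python
import numpy as np

def mase(y_true, y_pred, y_train, season_length):
    """Mean Absolute Scaled Error: the model's mean absolute error,
    scaled by the in-sample error of a Seasonal Naive forecast
    (which simply repeats the value from one season earlier)."""
    model_mae = np.mean(np.abs(y_true - y_pred))
    naive_mae = np.mean(np.abs(y_train[season_length:] - y_train[:-season_length]))
    return model_mae / naive_mae

# Illustrative monthly series: trend + yearly seasonality + noise.
rng = np.random.default_rng(0)
t = np.arange(60)
series = 100 + 2 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 3, 60)

# Backtest on the last 20% of history.
split = int(len(series) * 0.8)
train, holdout = series[:split], series[split:]

# Candidate forecast: here, a Seasonal Naive repeat of the last season.
forecast = np.array([train[split - 12 + (i % 12)] for i in range(len(holdout))])

print(mase(holdout, forecast, train, season_length=12))  # close to 1.0, since the candidate is the baseline itself
```

A MASE below 1 on the holdout means the candidate model beat the Seasonal Naïve baseline on recent data.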
By default, we recommend using AutoML. It offers the simplest and most reliable way to select the best forecast automatically, without requiring technical expertise.
ℹ️ Note
AutoML takes longer to compute because it runs and evaluates multiple models for each time series before selecting the most accurate one.
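Conceptually, the selection amounts to the loop below. This is a simplified sketch reusing the `mase` helper from above; `candidates` and the `fit`/`predict` interface are hypothetical stand-ins, not Pigment's actual API:

```python
def pick_best_model(series, season_length, candidates):
    """Backtest every candidate on the last 20% of history and
    return the name of the one with the lowest MASE."""
    split = int(len(series) * 0.8)
    train, holdout = series[:split], series[split:]

    scores = {}
    for name, model in candidates.items():
        # Hypothetical fit/predict interface standing in for each model.
        predictions = model.fit(train).predict(len(holdout))
        scores[name] = mase(holdout, predictions, train, season_length)

    return min(scores, key=scores.get)
```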
AutoETS
AutoETS is an open-source model based on the same logic as the native FORECAST_ETS function. It automatically selects the best-fitting ETS model for your data. It optimizes the smoothing parameters—alpha (level), beta (trend), and gamma (seasonality)—to minimize forecasting error, without requiring any manual tuning. For more information, see the official documentation.
🎓 Pigment Academy
For an overview of forecasting with AutoETS, watch this Pigment Academy video.
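AutoETS is also available in open-source libraries such as Nixtla's statsforecast. The sketch below shows how to run it standalone; the library choice and data are illustrative, and Pigment's internal setup may differ:

```python
import pandas as pd
from statsforecast import StatsForecast
from statsforecast.models import AutoETS

# Illustrative monthly data in statsforecast's expected long format.
df = pd.DataFrame({
    "unique_id": "sales",
    "ds": pd.date_range("2020-01-01", periods=48, freq="MS"),
    "y": range(48),
})

# AutoETS searches the ETS model space and tunes alpha, beta, and gamma itself.
sf = StatsForecast(models=[AutoETS(season_length=12)], freq="MS")
forecast = sf.forecast(df=df, h=12)  # 12-month forecast
print(forecast.head())
```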
Chronos-Bolt
Chronos-Bolt is an open-source forecasting model developed by AWS AI Labs, designed to handle a wide variety of forecasting scenarios, including those with limited historical data, multiple seasonalities, and irregular patterns.
It leverages foundation model architectures and pre-training on nearly 100 billion time series observations, which helps it generalize well even when your data is sparse or noisy. Chronos-Bolt is especially effective for short-term forecasting horizons.
In Pigment, we support the Chronos-Bolt base model in zero-shot mode, meaning it is applied directly without any fine-tuning on your data. This makes it simple to use while still benefiting from the model’s strong generalization capabilities.
For more information, see the official Amazon documentation and this article.
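For a sense of how zero-shot inference works, here is a minimal sketch using the open-source chronos-forecasting package and the public amazon/chronos-bolt-base checkpoint. The data is illustrative; in Pigment, this is all handled for you:

```python
import torch
from chronos import BaseChronosPipeline

# Load the pre-trained Chronos-Bolt base checkpoint (zero-shot: no fine-tuning).
pipeline = BaseChronosPipeline.from_pretrained(
    "amazon/chronos-bolt-base",
    device_map="cpu",
)

# Historical observations go in as a 1-D tensor of context.
context = torch.tensor([112.0, 118.0, 132.0, 129.0, 121.0, 135.0, 148.0, 148.0])

# Quantile forecasts for the next 12 steps; the median (0.5) is the point forecast.
quantiles, mean = pipeline.predict_quantiles(
    context=context,
    prediction_length=12,
    quantile_levels=[0.1, 0.5, 0.9],
)
print(mean)  # point forecast
```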
Prophet
Prophet is an open-source forecasting model developed by Meta, designed to handle seasonality, trends, and holiday effects. It copes well with a wide range of seasonal patterns and trend shifts, making it a strong choice for most business forecasting scenarios. For more information, see the official Meta documentation.
🎓 Pigment Academy
For an overview of forecasting with Prophet, watch this Pigment Academy video.
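To see what Prophet expects as input, here is a minimal standalone sketch using the open-source prophet package. The data is illustrative; in Pigment this wiring is handled for you:

```python
import pandas as pd
from prophet import Prophet

# Prophet expects a DataFrame with columns "ds" (dates) and "y" (values).
df = pd.DataFrame({
    "ds": pd.date_range("2020-01-01", periods=48, freq="MS"),
    "y": [100 + 2 * i + 10 * (i % 12 == 11) for i in range(48)],
})

model = Prophet()  # yearly seasonality is detected automatically
model.fit(df)

# Forecast 12 months beyond the end of the history.
future = model.make_future_dataframe(periods=12, freq="MS")
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail(12))
```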
Seasonal Differencing
Seasonal Differencing is a built-in model that computes the mean of differences between corresponding seasonal periods and assumes this difference will persist into the future.
Our model uses the classical seasonal differencing approach, where it:

1. Calculates recent seasonal differences:

   d(t) = y(t) − y(t − s), for t = T − m + 1, …, T

   where:
   - T = index of the last observed data point
   - s = seasonal length (e.g. 52 weeks)
   - m = moving average window

2. Takes the mean of these differences as the forecast increment:

   d̄ = mean of d(T − m + 1), …, d(T)

3. Applies this constant difference to smoothed seasonal patterns:

   ŷ(T + h) = ỹ(s(h)) + k(h) × d̄

   where:
   - ỹ = smoothed seasonal pattern from the last season
   - s(h) = position within the season for forecast step h
   - k(h) = ⌈h/s⌉ = number of seasons ahead
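As an illustration, the three steps above translate into a few lines of code. This is a minimal sketch that assumes a simple trailing moving average for the smoothed seasonal pattern ỹ; the exact smoothing used in the product is not specified here:

```python
import numpy as np

def seasonal_differencing_forecast(y, s, m, horizon):
    """Classical seasonal differencing, following the steps above.

    y: 1-D array of historical values (needs at least s + m points)
    s: seasonal length, m: moving average window, horizon: steps ahead
    """
    T = len(y)

    # Step 1: recent seasonal differences d(t) = y(t) - y(t - s)
    # over the last m observations.
    d = np.array([y[t] - y[t - s] for t in range(T - m, T)])

    # Step 2: mean difference = constant forecast increment.
    d_bar = d.mean()

    # Step 3: smoothed seasonal pattern from the last season
    # (assumption: an m-point trailing moving average).
    smoothed = np.convolve(y, np.ones(m) / m, mode="valid")
    y_tilde = smoothed[-s:]

    # Forecast step h (1-based): the smoothed value at position s(h),
    # shifted by k(h) = ceil(h / s) mean differences.
    return np.array([
        y_tilde[(h - 1) % s] + ((h + s - 1) // s) * d_bar
        for h in range(1, horizon + 1)
    ])
```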
This is a much more straightforward and robust approach than regression, especially for shorter time series where trend estimation might be unreliable.
How accurate are forecasting models?
Forecasting accuracy is highly context-dependent. It varies depending on the type of data, the forecasting horizon, and the external factors influencing your business. There is no single model that always performs best. The most reliable way to evaluate a model’s accuracy for your data is to measure its performance on your own historical data.
For research purposes, we built a benchmark dataset to compare the performance of different forecasting models and guide our roadmap toward implementing the most accurate ones. This benchmark is composed exclusively of data that best reflects typical customer use cases—in particular financial metrics such as sales—and does not include any customer data.
We use the median MASE (Mean Absolute Scaled Error) across all time series as our primary metric for comparing models.
A median MASE below 1 means the model performs better than the Seasonal Naïve baseline.
A median MASE above 1 means the model performs worse than this baseline.
Model performance on monthly forecasts (12-month horizon)
| Model | Median MASE |
|---|---|
Model performance on weekly forecasts (12-week horizon)
| Model | Median MASE |
|---|---|
Model performance on daily forecasts (91-day horizon)
| Model | Median MASE |
|---|---|