This article outlines the various options available when configuring Predictions.
AutoML
AutoML is a feature designed to help you find the most accurate forecast for your data. With just one click, AutoML tests multiple forecasting models on your time series and automatically selects the one that performs best.
For each time series, AutoML generates predictions using several models, including:
AutoETS
Chronos-2
Linear Regression
Moving Average
Prophet
Seasonal Differencing
It then evaluates the accuracy of each model using MASE (Mean Absolute Scaled Error; for more information, see its Wikipedia page), a robust metric that compares prediction errors to a simple baseline (a Seasonal Naïve forecast). Accuracy is measured by backtesting on the last 20% of your historical data, so the selected model is validated on recent trends.
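As a concrete illustration, MASE is the model's mean absolute error scaled by the in-sample error of a Seasonal Naïve forecast. The sketch below is illustrative only, not Pigment's implementation:

```python
def mase(actual, predicted, history, season=1):
    """Mean Absolute Scaled Error: forecast error divided by the
    in-sample error of a Seasonal Naive forecast (y[t] = y[t - season])."""
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
    naive_errors = [abs(history[t] - history[t - season])
                    for t in range(season, len(history))]
    scale = sum(naive_errors) / len(naive_errors)
    return mae / scale
```

A score below 1 means the model beats the Seasonal Naïve baseline on that series; a score above 1 means it does worse.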
By default, we recommend using AutoML. It offers the simplest and most reliable way to select the best forecast automatically, without requiring technical expertise.
ℹ️ Note
AutoML takes longer to compute, as it needs to run and evaluate multiple models for each time series to select the most accurate model.
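The selection loop described above can be sketched as follows. This is a simplified illustration, not Pigment's implementation: the two candidate models here (`naive` and `seasonal_naive`, with a hard-coded season of 2) are toy stand-ins for the real model list.

```python
def backtest_select(history, candidates, season=1):
    """Hold out the last 20% of history, have each candidate forecast it,
    and return the name of the model with the lowest MASE."""
    split = int(len(history) * 0.8)
    train, test = history[:split], history[split:]

    def mase(actual, predicted):
        mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
        scale = sum(abs(train[t] - train[t - season])
                    for t in range(season, len(train))) / (len(train) - season)
        return mae / scale

    scores = {name: mase(test, fit_predict(train, len(test)))
              for name, fit_predict in candidates.items()}
    return min(scores, key=scores.get)

# Two toy candidates: repeat the last value vs. repeat the last season (s = 2)
candidates = {
    "naive": lambda train, h: [train[-1]] * h,
    "seasonal_naive": lambda train, h: [train[-2 + (i % 2)] for i in range(h)],
}
```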
AutoETS
AutoETS is an open-source model based on the same logic as the native FORECAST_ETS function. It automatically selects the best-fitting ETS model for your data. It optimizes the smoothing parameters—alpha (level), beta (trend), and gamma (seasonality)—to minimize forecasting error, without requiring any manual tuning. For more information, see the official documentation.
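AutoETS itself searches the full ETS model space, but the role of the smoothing parameters can be illustrated with Holt's linear method, which uses only the alpha (level) and beta (trend) parameters mentioned above. This is a hedged sketch, not the AutoETS algorithm; the function name and default parameter values are illustrative, where AutoETS would instead optimize them to minimize forecasting error.

```python
def holt_forecast(series, horizon, alpha=0.5, beta=0.3):
    """Holt's linear exponential smoothing: alpha smooths the level,
    beta smooths the trend. Forecasts extend the last level and trend."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)  # updated level
        trend = beta * (level - prev_level) + (1 - beta) * trend  # updated trend
    return [level + (h + 1) * trend for h in range(horizon)]
```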
🎓 Pigment Academy
For an overview of forecasting with AutoETS, watch this Pigment Academy film.
Chronos-2
Chronos-2 is an open-source forecasting model developed by AWS AI Labs, designed to handle a wide variety of forecasting scenarios, including those with limited historical data, multiple seasonalities, and irregular patterns. Chronos-2 supports univariate forecasting as well as multivariate and covariate-informed forecasting tasks—a significant advance over prior models.
Chronos-2 leverages a foundation-model architecture and was pretrained on a massive corpus of time series data (real-world and synthetic), enabling it to generalize well even when your data is sparse or noisy.
In Pigment, we support Chronos-2 in zero-shot mode, meaning you can apply it out of the box, with minimal configuration. This lets you benefit from its strong generalization and forecasting capabilities without needing to fine-tune on your own data.
For more information, see the official Amazon documentation and this article.
Prophet
Prophet is an open-source forecasting model developed by Meta, designed to handle seasonality, trends, and holiday effects. For more information, see the official Meta documentation. Prophet handles a wide range of seasonal patterns and trend shifts effectively, making it a strong choice for most business forecasting scenarios.
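At its core, Prophet fits an additive model of the form y(t) = g(t) + s(t) + h(t), combining a trend, a seasonal term, and holiday effects. The toy function below illustrates that additive structure only; it is not Prophet's fitting procedure, and all names and parameters are hypothetical.

```python
def additive_forecast(t, trend_slope, season, holiday_lift, holidays):
    """Toy additive model in the spirit of Prophet's decomposition."""
    g = trend_slope * t                       # g(t): linear trend
    s = season[t % len(season)]               # s(t): repeating seasonal term
    h = holiday_lift if t in holidays else 0  # h(t): holiday effect
    return g + s + h
```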
🎓 Pigment Academy
For an overview of forecasting with Prophet, watch this Pigment Academy film.
Seasonal Differencing
Seasonal Differencing is a built-in model that computes the mean of differences between corresponding seasonal periods and assumes this difference will persist into the future.
Our model uses the classical seasonal differencing approach, where it:

Calculates recent seasonal differences:

d_i = y(T − i + 1) − y(T − s − i + 1), for i = 1, …, m

where:
T = index of the last observed data point
s = seasonal length (e.g. 52 weeks)
m = moving average window

Takes the mean of these differences as the forecast increment:

d = (d_1 + … + d_m) / m

Applies this constant difference to the smoothed seasonal pattern:

ŷ(T + h) = ỹ(T − s + s(h)) + k(h) · d

where:
ỹ = smoothed seasonal pattern from the last season
s(h) = position within the season for forecast step h
k(h) = ⌈h/s⌉ = number of seasons ahead
This is a much more straightforward and robust approach than regression, especially for shorter time series where trend estimation might be unreliable.
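Putting the steps above together, a minimal sketch of the approach might look like this (assuming, for simplicity, that the raw last season stands in for the smoothed pattern ỹ):

```python
def seasonal_diff_forecast(y, horizon, s, m):
    """Seasonal-differencing forecast sketch.
    s = seasonal length, m = moving average window of seasonal differences."""
    T = len(y)  # y[T - 1] is the last observed data point
    # Mean of the last m season-over-season differences (forecast increment d)
    d = sum(y[T - i] - y[T - s - i] for i in range(1, m + 1)) / m
    # Last observed season, standing in for the smoothed pattern
    last_season = y[T - s:]
    forecast = []
    for h in range(1, horizon + 1):
        pos = (h - 1) % s   # s(h): position within the season
        k = -(-h // s)      # k(h) = ceil(h / s): number of seasons ahead
        forecast.append(last_season[pos] + k * d)
    return forecast
```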
How accurate are forecasting models?
Forecasting accuracy is highly context-dependent. It varies depending on the type of data, the forecasting horizon, and the external factors influencing your business. There is no single model that always performs best. The most reliable way to evaluate a model’s accuracy for your data is to measure its performance on your own historical data.
For research purposes, we built a benchmark dataset to compare the performance of different forecasting models and guide our roadmap toward implementing the most accurate ones. This benchmark is composed exclusively of data that best reflects typical customer use cases—in particular financial metrics such as sales—and does not include any customer data.
We use the average MASE (Mean Absolute Scaled Error) across all time series as our primary metric for comparing models.
An average MASE below 1 means the model performs better than the Seasonal Naïve baseline.
An average MASE above 1 means the model performs worse than this baseline.
Chronos-2 consistently outperforms all other models in our benchmark, both with and without covariates, and across most calendar granularities. These results align with findings from public benchmarks such as GIFT-EVAL.
Univariate Forecasting
Univariate forecasting consists of predicting a Metric using only its own historical values. It’s like forecasting sales using past sales data alone.
Model performance on monthly forecasts (12-months horizon):
| Model | Average MASE |
|---|---|
Model performance on weekly forecasts (52-weeks horizon):
| Model | Average MASE |
|---|---|
Model performance on daily forecasts (365-days horizon):
| Model | Average MASE |
|---|---|
Covariate Forecasting
Covariate forecasting consists of predicting a Metric using its history plus other influencing factors. It’s like forecasting sales using past sales, promotions, prices, or holidays.
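As a toy illustration of how a covariate can inform a forecast, the sketch below fits a one-covariate linear regression (e.g. sales against a promotion flag) and then forecasts future periods from known future covariate values. This is illustrative only, not how the covariate-aware models above work.

```python
def fit_covariate_model(y, x):
    """Least-squares fit of y = a + b * x for a single covariate x
    (e.g. a promotion flag or a price). Returns (a, b)."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Fit on history, then forecast periods whose promotion plan is known
a, b = fit_covariate_model(y=[100, 130, 100, 130], x=[0, 1, 0, 1])
future_promos = [1, 0]
forecast = [a + b * xi for xi in future_promos]
```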
Model performance on monthly forecasts (12-months horizon):
| Model | Average MASE |
|---|---|
Model performance on weekly forecasts (52-weeks horizon):
| Model | Average MASE |
|---|---|
Model performance on daily forecasts (365-days horizon):
| Model | Average MASE |
|---|---|