Predict, Refine, Synthesize: Self-Guiding Diffusion Models for Probabilistic Time Series Forecasting
Marcel Kollovieh, Abdul Fatir Ansari, Michael Bohlke-Schneider, Jasper Zschiegner, Hao Wang, Yuyang Wang
The authors demonstrate that a single unconditionally trained generative model can be just as useful as task-specific conditional models across a range of tasks, without any changes to its training procedure.
To this end, they introduce TSDiff, an unconditionally trained diffusion model for time series, and show that it can be used to:
- Sample from a conditional (forecasting) distribution despite having been trained unconditionally, via a self-guidance mechanism (sketched after this list);
- Refine predictions from other forecasting models by casting refinement as a regularized optimization problem over the model's learned implicit likelihood (see the second sketch below);
- Provide better synthetic training data for downstream forecasters than other time-series generative models.
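To make the first capability concrete, below is a minimal sketch of observation self-guidance in the spirit of classifier-guided DDPM sampling: at every reverse step, the model's own denoised estimate is compared against the observed context, and the gradient of that (Gaussian) log-likelihood nudges the sample toward consistency with the observations. The names `denoiser`, `betas`, `obs_mask`, and the guidance `scale` are illustrative assumptions, not the paper's exact interface or settings.

```python
import torch

def guided_sample(denoiser, y_obs, obs_mask, betas, scale=1.0):
    """One conditional sample from an unconditionally trained DDPM.

    denoiser(x, t) is a hypothetical trained network predicting the
    noise added at step t; obs_mask is 1 on observed (context) points.
    """
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn_like(y_obs)  # start from pure noise
    for t in reversed(range(len(betas))):
        x = x.detach().requires_grad_(True)
        eps = denoiser(x, t)
        # Denoised estimate of the clean series from the current sample.
        x0_hat = (x - torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alpha_bars[t])
        # Self-guidance: gradient of a Gaussian log-likelihood that ties
        # the denoised estimate to the observed part of the series.
        log_lik = -((obs_mask * (x0_hat - y_obs)) ** 2).sum()
        grad = torch.autograd.grad(log_lik, x)[0]
        with torch.no_grad():
            # Standard DDPM reverse mean, shifted by the scaled guidance.
            mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
            mean = mean + scale * grad
            noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
            x = mean + torch.sqrt(betas[t]) * noise
    return x.detach()
```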
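The refinement capability can be sketched in the same hedged spirit: treat the diffusion training loss at random noise levels as a proxy for the model's negative log-likelihood and take gradient steps on the forecast itself, with a quadratic penalty keeping it close to the base forecaster's output. The optimizer, `lam`, and step count below are illustrative choices, not the paper's settings.

```python
import torch

def refine(denoiser, base_forecast, alpha_bars, lam=0.1, steps=100, lr=1e-2):
    """Refine a base forecast by (approximately) raising its likelihood
    under a trained diffusion model, regularized toward the original."""
    x = base_forecast.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        t = torch.randint(0, len(alpha_bars), (1,)).item()
        noise = torch.randn_like(x)
        # Noise the candidate exactly as in diffusion training...
        x_t = torch.sqrt(alpha_bars[t]) * x + torch.sqrt(1.0 - alpha_bars[t]) * noise
        # ...and use the denoising error as an approximate energy.
        energy = ((denoiser(x_t, t) - noise) ** 2).mean()
        loss = energy + lam * ((x - base_forecast) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x.detach()
```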
The authors run experiments for each of these capabilities, comparing TSDiff against statistical and probabilistic models tailored to the respective tasks. Across the board, TSDiff performs at least as well as, and often better than, these task-specific baselines.