For energy traders, risk managers, data analysts, market makers and anyone who wants to control exposure to energy price volatility.
We use AI for highly accurate probabilistic forecasting of time series data. Our technology helps businesses in sectors such as energy, finance and logistics manage uncertainty with precision and confidence. We provide you with software that calculates the probability distribution of key variables that can be described by time series (prices, temperatures, consumption, queue lengths, etc.).
Based in Porto, Portugal, we operate from UPTEC, the Technology Park of the University of Porto.
By simulating a massive number of scenarios, we provide a complete probabilistic description of the future of your key variable, forecasting expected behavior and extreme events alike.
Drawing on scientific literature and proprietary techniques, our approach ensures that every algorithm we use is statistically robust, maintaining the highest standards of precision and accuracy.
We provide you with all the simulation data for your analysts to use in every possible way. For simpler analysis, we also provide dashboards with the important forecast statistics.
The distinguishing factor of our technology is how it creates models for probabilistic forecasting: our models learn a probability distribution for every future time point we want to predict. This is much more powerful than simply calculating an expected value, as in more traditional machine learning applications.
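For illustration only (the numbers below are mocked, not model output), this sketch contrasts the shape of a traditional point forecast with the shape of a probabilistic forecast: a single value per future time point versus a full sample of simulated values per time point, from which any statistic can be derived.

```python
import numpy as np

rng = np.random.default_rng(0)

horizons = 24            # number of future time points (e.g. hours ahead)
n_scenarios = 10_000     # number of simulated scenarios per time point

# A traditional point forecast: one expected value per future time point.
point_forecast = np.full(horizons, 50.0)                            # shape (24,)

# A probabilistic forecast: a whole sample of possible values per time point,
# mocked here with random numbers purely for illustration.
simulations = rng.normal(50.0, 8.0, size=(n_scenarios, horizons))   # shape (10000, 24)

# From the simulations any statistic can be read off, not just the mean:
expected = simulations.mean(axis=0)                 # expected value per time point
p99 = np.quantile(simulations, 0.99, axis=0)        # extreme upside per time point
prob_above_60 = (simulations > 60.0).mean(axis=0)   # probability of exceeding 60 per time point
```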
Through continuous research, we are always improving our methods and techniques.
We start by collecting and analyzing historical data of the key variable to be modeled. We conduct a thorough quantitative analysis to identify statistical patterns and validate them with insights from relevant business experts.
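As a simplified illustration of this kind of quantitative analysis (it is not our full procedure, and the file path below is only a placeholder for your own data source), a basic first check on hourly price data is its autocorrelation at daily and weekly lags, which exposes seasonality:

```python
import numpy as np

def autocorrelation(series: np.ndarray, lag: int) -> float:
    """Sample autocorrelation of a 1-D series at a given lag."""
    x = series - series.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

# Placeholder path: load an hourly price history from your own data source.
prices = np.loadtxt("hourly_prices.csv")

for lag, label in [(24, "daily"), (168, "weekly")]:
    print(f"{label} autocorrelation: {autocorrelation(prices, lag):.2f}")
```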
We design a stochastic model that accurately captures the observed dynamics, taking into account both the hidden patterns and the randomness of the data. Properly modeling the random fluctuations is critical.
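Our actual models are proprietary, but as a simple illustration of what a stochastic model for a price series can look like, here is a discretized mean-reverting process: the pull toward a long-run level captures the systematic pattern, while the noise term captures the random fluctuations.

```python
import numpy as np

def simulate_mean_reverting(x0, mu, theta, sigma, n_steps, rng):
    """One path of a discretized mean-reverting process:
    x[t+1] = x[t] + theta * (mu - x[t]) + sigma * eps,  eps ~ N(0, 1).
    """
    x = np.empty(n_steps + 1)
    x[0] = x0
    eps = rng.normal(size=n_steps)
    for t in range(n_steps):
        x[t + 1] = x[t] + theta * (mu - x[t]) + sigma * eps[t]
    return x

rng = np.random.default_rng(1)
# Illustrative parameter values, one week of hourly steps.
path = simulate_mean_reverting(x0=55.0, mu=50.0, theta=0.1, sigma=2.0, n_steps=168, rng=rng)
```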
We experiment with different machine learning architectures and select the best one using our statistical testing tools. We continuously calibrate the model parameters on the input data to update the model's information while preserving the original dynamical behavior.
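Continuing the illustrative mean-reverting model above (not our actual calibration procedure), calibration can be sketched as fitting the model parameters to historical data, here with a simple first-order regression:

```python
import numpy as np

def calibrate_mean_reverting(series: np.ndarray):
    """Fit x[t+1] = a + b * x[t] + eps by least squares and map (a, b)
    back to the mean-reverting parameters (mu, theta, sigma)."""
    x_now, x_next = series[:-1], series[1:]
    b, a = np.polyfit(x_now, x_next, deg=1)    # slope b, intercept a
    residuals = x_next - (a + b * x_now)
    theta = 1.0 - b                            # speed of mean reversion
    mu = a / theta                             # long-run level
    sigma = residuals.std(ddof=1)              # size of the random fluctuations
    return mu, theta, sigma
```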
Using advanced simulation techniques, we construct a probability distribution that predicts the potential ways the key variable can evolve and the likelihood of each scenario. This probability distribution is the core output of the model.
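Staying with the illustrative model above, the simulation step amounts to generating a large number of paths and keeping all of them, so that every future time point carries an empirical probability distribution rather than a single number:

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, horizon = 10_000, 168
mu, theta, sigma, x0 = 50.0, 0.1, 2.0, 55.0    # illustrative parameters

# Simulate all paths at once: paths[i, t] is scenario i at future time t.
paths = np.empty((n_paths, horizon + 1))
paths[:, 0] = x0
for t in range(horizon):
    eps = rng.normal(size=n_paths)
    paths[:, t + 1] = paths[:, t] + theta * (mu - paths[:, t]) + sigma * eps

# The empirical distribution at, say, 24 steps ahead:
day_ahead = paths[:, 24]
quantiles = np.quantile(day_ahead, [0.01, 0.5, 0.99])
```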
Finally, we use the calculated probability distribution to make forecasts for any metric that depends directly on the modeled key variable, obtaining highly accurate results.
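For example, once the simulated distribution exists, any quantity that is a function of the key variable inherits a distribution of its own. The sketch below uses mocked price simulations and a hypothetical hourly consumption profile to derive the distribution of a procurement cost:

```python
import numpy as np

rng = np.random.default_rng(3)
# Mock simulation output: 10,000 scenarios of hourly spot prices over one week,
# standing in for the real model's simulated paths.
paths = rng.normal(50.0, 8.0, size=(10_000, 168))

# Hypothetical hourly load profile over the same horizon (illustrative only).
consumption = np.full(168, 10.0)            # 10 MWh every hour

# Cost of buying the profile at spot prices, per scenario.
cost = (paths * consumption).sum(axis=1)

expected_cost = cost.mean()
cost_p95 = np.quantile(cost, 0.95)          # cost exceeded in only 5% of scenarios
prob_over_budget = (cost > 85_000).mean()   # probability of exceeding an illustrative budget
```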
Our probabilistic forecasting models predict the probability distribution of the future value of a chosen key variable. In contrast, standard forecasting applications focus on obtaining a model that only predicts the expected value of the variable. Let's use rain forecasting as an example: "What is the expected rain volume for next month?" The answer from a typical forecasting model could be something like "155 mm". In comparison, our probabilistic forecasting models provide the full probability distribution with every possible value and its likelihood ("the probability of rain between 140 mm and 150 mm is 30%, and between 150 mm and 160 mm it is 35%"). This allows us to ask questions such as "What rain volume will be exceeded in only the worst 1% of cases?".
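In code, with a mocked distribution standing in for a real forecast (the numbers are illustrative only), these questions reduce to simple operations on the simulated sample:

```python
import numpy as np

rng = np.random.default_rng(4)
# Mock probabilistic forecast of next month's rain volume in mm (illustrative only).
rain_mm = rng.gamma(shape=100.0, scale=1.55, size=100_000)

p_140_150 = ((rain_mm >= 140) & (rain_mm < 150)).mean()
p_150_160 = ((rain_mm >= 150) & (rain_mm < 160)).mean()
worst_1pct = np.quantile(rain_mm, 0.99)   # volume exceeded in only 1% of scenarios

print(f"P(140-150 mm) = {p_140_150:.0%}, P(150-160 mm) = {p_150_160:.0%}")
print(f"Rain volume exceeded in only 1% of cases: {worst_1pct:.0f} mm")
```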
Since we provide you with the probability distribution of the future values, you are able to anticipate different scenarios and prepare for them. Depending on how the different scenarios might affect you, but also on how likely they are, you can manage your risk by implementing strategies that mitigate possible losses from extreme events. Common ways of doing this include monitoring Value-at-Risk (VaR) or hedging ("delta hedging") the exposure to specific risk factors. The more adept you are at using analytics, the more you can potentially extract from the simulation data we provide. We also work with you to optimize the ways in which you can use the data within your specific context.
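As a minimal sketch (mocked simulations and an illustrative position, not a recommendation on how to hedge), Value-at-Risk can be read directly off the simulated scenarios:

```python
import numpy as np

rng = np.random.default_rng(5)
# Mock simulated prices at delivery and a fixed position (illustrative numbers only).
simulated_prices = rng.lognormal(mean=np.log(50.0), sigma=0.25, size=100_000)
position_mwh = -100.0        # short 100 MWh, i.e. exposed to price rises
entry_price = 50.0

pnl = position_mwh * (simulated_prices - entry_price)   # profit and loss per scenario

# 99% Value-at-Risk: the loss exceeded in only 1% of scenarios.
var_99 = -np.quantile(pnl, 0.01)
print(f"99% VaR: {var_99:,.0f}")
```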
The models do not predict what is going to happen, but what can happen and how likely it is. They work with the probability of quantitative fluctuations, learning directly from the numerical data. As an example, electricity prices might rise if there is an unexpected incident at a big power plant; such an incident could cause a 50% rise in the spot price. The models would not predict that this specific power plant would have this incident, but they would calculate in advance the probability of a 50% upward shock in the price. The same shock could just as well have come from a different source than the power plant. What matters to the models are the numerical values, not the qualitative origin of the fluctuations.
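In terms of the simulation output, such a question is again a simple computation; the sketch below uses mocked simulations purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
# Mock simulated day-ahead spot prices relative to today's price (illustrative only).
price_today = 60.0
simulated_tomorrow = price_today * rng.lognormal(mean=0.0, sigma=0.2, size=100_000)

# Probability of an upward shock of at least 50%, regardless of its cause.
prob_50pct_shock = (simulated_tomorrow >= 1.5 * price_today).mean()
print(f"P(price rises by 50% or more): {prob_50pct_shock:.2%}")
```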
No - our technology can potentially be applied to any numerical time series. Our methodology is not to build one single model that works for everything (which we find unrealistic), but to develop multiple techniques for building models that adapt to different data. The fundamental step is to compare them statistically in order to select the best one for a given application. ESPF is the first commercial application of the Astrolabium technology, but it is not meant to be the only one - more use cases are being tested and investigated.
We need, at a minimum, a large sample of past values of the time series being modeled. Since we focus on predicting the probability of events, including unlikely ones, there is a minimum sample size needed for acceptable accuracy, and it depends on what is being modeled. The larger and more recent the dataset, the more accurate the models' predictions will be. The data is the most important ingredient in building accurate models. In some cases where high correlation justifies it, auxiliary time series might be used as well.
We offer different ways of providing you with the forecasts. The most comprehensive way is direct access to an API that allows you to download all the simulations that the model performs. This way you are able to use them within your analytics, risk management and trading departments to perform your own highly detailed analyses. This approach allows you to construct all the statistical metrics and reports you want, as well as to simulate strategies and their results. In contrast, the most convenient way is a simple dashboard showing the metrics most commonly used for forecasting (mean, standard deviation, quantiles, ...) for every future time point being predicted. This approach targets speed and ease of use, and allows you to bypass the more complex digital infrastructure requirements and start using the forecasts immediately.
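As a sketch of the API route (the endpoint URL and response layout below are hypothetical; the actual details are provided with your access), the dashboard-style statistics can be computed directly from the downloaded simulations:

```python
import numpy as np
import requests

# Hypothetical endpoint and response layout, for illustration only.
resp = requests.get("https://api.example.com/espf/simulations/latest")
data = resp.json()

# Assume the simulations arrive as a list of scenarios, each a list of values per future hour.
sims = np.array(data["simulations"])          # shape: (n_scenarios, n_timepoints)

# Dashboard-style summary statistics per future time point:
summary = {
    "mean": sims.mean(axis=0),
    "std": sims.std(axis=0),
    "p05": np.quantile(sims, 0.05, axis=0),
    "p50": np.quantile(sims, 0.50, axis=0),
    "p95": np.quantile(sims, 0.95, axis=0),
}
```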
We constantly re-train and re-calibrate our models as time passes and more data becomes available; the models converge to increasingly accurate forecasts as more data is used in this process. Additionally, we take great care in developing robust statistical testing tools that we routinely use to backtest our models and ensure their accuracy remains high. Finally, we constantly update our knowledge from the scientific literature and work on developing new modeling techniques, so we regularly update our models when we develop and test a technique with higher accuracy than the one currently in use.
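One simple example of such a backtest (with illustrative numbers, not our full testing suite) is checking the empirical coverage of a predicted quantile against realized values:

```python
import numpy as np

def quantile_coverage(realized: np.ndarray, predicted_quantile: np.ndarray) -> float:
    """Fraction of periods in which the realized value stayed below the predicted
    quantile; for a well-calibrated model this should be close to the quantile
    level itself (e.g. about 95% for the 95% quantile)."""
    return float((realized <= predicted_quantile).mean())

# Illustrative arrays: one entry per backtested period, taken from your own records.
realized = np.array([48.0, 52.5, 61.0, 45.3, 80.2])
predicted_p95 = np.array([70.0, 68.5, 75.0, 66.0, 78.0])

coverage = quantile_coverage(realized, predicted_p95)
print(f"Empirical coverage of the 95% quantile: {coverage:.0%} (target: 95%)")
```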