Quota Market Modelling

Quota markets are increasingly used to ration power-market and environmental goods, including capacity, emissions permits and renewable subsidies. The most prominent and most international of these markets is the EU ETS; however, there are many more, including the Swedish-Norwegian el-cert market and the growing number of capacity auctions and certificate markets appearing in Europe and elsewhere.

One characteristic of these markets is that the good being traded is “artificial” – demand for it has typically been legislated by government. This means that demand is defined by rules, not by want or need, and so these markets behave quite differently from “normal” ones. A quota market is often either long (there are more quotas or certificates than the legislated demand requires) or short (there are not enough). If the market is long, the price falls to zero. If it is short, the price is set by the legislated penalty for not holding enough quotas. Prices in between reflect uncertainty – participants do not know whether the market will end up long or short. This dynamic produces behaviour such as big price swings or, often, price spikes followed by collapses as the market starts to look long, and then rule changes to support price levels.
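The long/short dynamic can be made concrete with a toy calculation. The sketch below is purely illustrative (it is not Optimeering's model): if the only two outcomes are “long” (price zero) and “short” (price equal to the penalty), then a risk-neutral price is roughly the probability of ending up short times the penalty level – so the price collapses as the market starts to look long.

```python
# Toy illustration only: a risk-neutral quota price when the market clears
# either long (price 0) or short (price = legislated penalty).

def expected_quota_price(p_short: float, penalty: float) -> float:
    """Probability-weighted price over the two outcomes 'long' and 'short'."""
    if not 0.0 <= p_short <= 1.0:
        raise ValueError("p_short must be a probability")
    return p_short * penalty

# As beliefs shift from 'probably short' to 'probably long', the price slides:
for p in (0.9, 0.5, 0.1):
    print(f"P(short)={p:.1f} -> price ~= {expected_quota_price(p, 40.0):.1f}")
```

The penalty level of 40.0 is an arbitrary illustrative number; the point is that the whole price path is driven by beliefs about the long/short balance, not by a conventional supply-demand equilibrium.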

This means that such markets cannot be analysed effectively by traditional economic models built on the idea of equilibrium. Quota markets are not in equilibrium, so they should not be analysed as if they were.

Instead, at Optimeering we tackle this problem with intelligent agent-based models, which combine AI techniques with modelling of actual market actors to simulate market behaviour under realistic conditions. Together with Thema Consulting Group, we developed the MARC model for the Swedish-Norwegian el-cert market, which has been used by a range of developers, regulators and operators to better understand and predict future market outturns. We have a blog post about the MARC model here (in Norwegian). To learn more about how we can use agent modelling to help you better understand the quota markets that drive your bottom line, contact us here.

Market Clearing – aFRR

New markets for reserve services and near-real-time power delivery require new, sophisticated tools for calculating optimal market clearing. In theory, clearing a market is simple – you just need to match supply and demand. In practice, though, complex bid and offer structures, as well as integration between reserve services, often mean that things are not quite so straightforward. Getting it right matters: clearing and pricing new markets reliably, transparently and quickly is essential for efficient market operation.

Optimeering has a unique combination of modelling expertise and market know-how that positions us to deliver robust, customised market clearing tools for any type of power market. Recently, we developed the clearing algorithm for the upcoming pan-Nordic aFRR market for the consortium of TSOs SVK, Statnett, Energinet.dk and Fingrid.

The four Nordic TSOs are planning to implement a common Nordic capacity market for aFRR (automatic frequency restoration reserves) in 2018, and our part of this work is developing and implementing the market clearing engine used to select the combination of bids that is most efficient (that “maximises social welfare surplus”). Given that the bidding rules are quite complex (bids can be linked upwards, downwards and in time, marked as indivisible, and be asymmetrical, to mention a few) and the requirement that the market operate in a socio-economically efficient manner, the problem is not an easy one.

The problem itself is what we call a combinatorial optimization problem, or (mixed-)integer program. The linking and indivisibility of bids is a fundamental characteristic of the bid selection problem, and it means that traditional clearing methods based on hourly bid price alone are insufficient. Given the size of the problem (hourly bids in multiple bidding zones), checking every possible combination of bids is not a feasible approach either. Instead, a solution algorithm is needed that selects bids for the aFRR market while accounting for the complex bid structures, using advanced mathematical optimization techniques. We have come up with some clever formulation approaches (if we do say so ourselves …) that exploit structure in the problem to clear the market optimally and very quickly – in seconds, not minutes or hours. Our approach also works for several alternative pricing mechanisms, including pay-as-bid and marginal-cost pricing.
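To see why indivisible bids break price-only clearing, consider a deliberately tiny sketch (this is not the production algorithm, and the bids are made-up numbers): procure at least a required amount of reserve capacity at minimum cost, where each bid is all-or-nothing. At this size we can brute-force every subset; real instances add linked bids, many zones and hours, which is exactly why enumeration is infeasible and a MIP formulation is used instead.

```python
# Toy bid-selection problem: indivisible (all-or-nothing) capacity bids,
# minimise procurement cost subject to covering the MW requirement.
from itertools import combinations

bids = [  # (name, MW, price per MW) -- illustrative numbers only
    ("A", 30, 10.0),
    ("B", 20, 12.0),
    ("C", 25, 11.0),
    ("D", 15, 9.0),
]
requirement = 50  # MW of reserve capacity to procure

best = None
for r in range(1, len(bids) + 1):
    for combo in combinations(bids, r):
        mw = sum(b[1] for b in combo)
        cost = sum(b[1] * b[2] for b in combo)
        if mw >= requirement and (best is None or cost < best[1]):
            best = (combo, cost)

accepted, pay_as_bid_cost = best
print("accepted bids:", [b[0] for b in accepted])
print("pay-as-bid cost:", pay_as_bid_cost)
# Under uniform pricing, all accepted MW would instead settle at the highest
# accepted bid price (well-defined here; subtler once bids are linked).
marginal_price = max(b[2] for b in accepted)
print("uniform clearing price:", marginal_price)
```

Note that the cheapest bid per MW (“D”) is not in the optimal solution – accepting it would force a more expensive combination overall. That is the indivisibility effect that makes sorting by hourly bid price insufficient.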

If you are interested in learning more about the aFRR market, or how we can help in other markets, please contact us.

The PUMA Algorithms

We are getting close to a Beta release of the first member of the PUMA Algorithms family, and I wanted to update everyone on what to expect and on progress so far.

The PUMA Algorithms are advanced models for the analysis and prognosis of power markets that take detailed account of future uncertainty in inputs such as hydro inflow, demand, and fuel and emissions prices. The first PUMA Algorithm, PUMA SP, is being developed as part of the PUMA Research Project sponsored by the Research Council of Norway and paid for by several major actors in the Nordic power market.

PUMA SP is a fundamental medium-term model that captures the impact of multiple uncertainty drivers such as inflows, availabilities, demand, and fuel and CO2 prices. In modelling something as complex as a power market, you have to make trade-offs: speed versus detail; detailed modelling of hydro versus detailed modelling of CHP; uncertainty versus perfect foresight. With PUMA SP we have taken the view that the user is best positioned to make these trade-offs, as they can change from analysis to analysis.

PUMA SP is therefore designed with flexibility and ease-of-use in mind, and can be configured to run at whatever level of detail you need. Want to run deterministically? Just one parameter. Uncertain inflows and fuel prices? The same. Add in uncertain demand? No problem. One nice thing: once you have one uncertain parameter, adding more does not add much solve overhead. So, if you need “quick-and-dirty”, that’s what you can have, whilst being able to use the same model and data later on to refine the analysis and model a fully detailed market response.
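To illustrate the idea of per-driver switches, here is a hypothetical configuration sketch. The parameter names below are invented for illustration and are not PUMA SP's actual interface; the point is simply that each uncertainty driver is an independent toggle, so an analysis can start deterministic and be refined later with the same model and data.

```python
# Hypothetical configuration -- names invented for illustration,
# not PUMA SP's real API.
run_config = {
    "horizon_weeks": 104,
    "stochastic": {          # toggle each uncertainty driver independently
        "inflow": True,
        "fuel_prices": True,
        "demand": False,     # flip to True to add demand uncertainty
        "co2_price": False,
    },
    "n_scenarios": 50,       # irrelevant when every driver is deterministic
}

deterministic = not any(run_config["stochastic"].values())
print("deterministic run" if deterministic else
      f"stochastic run over {run_config['n_scenarios']} scenarios")
```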

All PUMA Algorithms are fully integrated with the PUMA Analytic Framework, and are provided as python packages with a browser-based front-end. PUMA SP is currently in locked alpha testing with our development license group, but we are planning for it to be available to new customers in Beta at the end of 2017. Drop us a line to find out more.

PUMA Analytic Framework

Most of the models we build use quite a lot of data, especially time series or time-stamped data. However, few of them involve petabytes – a data lake, not a data ocean. Handling this can be tricky: spreadsheets are far too limited, and relational databases are generally too slow to read and (especially) write even moderate volumes of time series data. At the other end of the scale, large NoSQL solutions are too complex and unwieldy – like using a sledgehammer to crack a nut.

We needed something that could read and write time series and time-stamped data quickly; could classify, group and tag series (“inflow series”, “cases” or “scenarios”); could extract series even with missing or incomplete data; and had an overhead low enough that it could be installed on a desktop and accessed via the python tools we use every day.

We couldn’t find it, so we decided to build our own – the PUMA Analytic Framework, for the flexible and rapid storage, retrieval, visualisation and manipulation of time series and time-stamped data. Implemented as a python package, the Framework uses a combination of a relational database and fast flat-file storage and retrieval to easily and cheaply store fairly large data volumes on the desktop and use them in your data analysis and models. Being a python package, it gives you immediate access to all your usual python tools, such as numpy, scipy, pandas and Jupyter Notebooks.
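The division of labour described above – a relational catalogue for metadata and tags, flat files for the values themselves – can be sketched in a few lines of standard-library python. This is a minimal illustration of the design, not the Framework's actual API; the table layout and function names are invented.

```python
# Minimal sketch of the metadata-in-DB, values-in-flat-files design.
import sqlite3
import struct
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
db = sqlite3.connect(str(root / "catalogue.db"))
db.execute("CREATE TABLE series (id INTEGER PRIMARY KEY, name TEXT, tag TEXT, path TEXT)")

def write_series(name: str, tag: str, values: list) -> int:
    """Register a series in the relational catalogue; dump values to a flat binary file."""
    path = root / f"{name}.bin"
    path.write_bytes(struct.pack(f"{len(values)}d", *values))
    cur = db.execute("INSERT INTO series (name, tag, path) VALUES (?, ?, ?)",
                     (name, tag, str(path)))
    db.commit()
    return cur.lastrowid

def read_by_tag(tag: str) -> dict:
    """Find series by tag in the catalogue, then bulk-read their flat files."""
    out = {}
    for name, path in db.execute("SELECT name, path FROM series WHERE tag = ?", (tag,)):
        raw = Path(path).read_bytes()
        out[name] = list(struct.unpack(f"{len(raw) // 8}d", raw))
    return out

write_series("inflow_no1", "inflow series", [1.5, 2.0, 0.0])
write_series("inflow_no2", "inflow series", [0.5, 0.25])
print(read_by_tag("inflow series"))
```

The query side stays in SQL (classification, grouping, tagging), while the heavy reads and writes bypass the database entirely – which is the reason this layout is fast enough for desktop use with moderate data volumes.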

The Framework makes developing new models and analytic tools easier and faster by providing data in a readily accessible, standardised format. We use the Framework to store and prepare data for our own PUMA Algorithms power market models, and it can be fully integrated with your existing and third-party power market modelling tools in the same way. This is one of the big advantages of the PUMA Analytic Framework, in our opinion – it enables you to have a single, consistent database for all your models.

The PUMA Analytic Framework is in locked alpha testing and will be made available in Beta shortly, after which you will be able to download the package and documentation from our website for free. Email us in the meantime if you are interested in finding out more.