Title
Earning While Learning: An Adversarial Multi-Armed Bandit Based Real-Time Bidding Scheme in Deregulated Electricity Market
Abstract
As a specific incarnation of cyber-physical-social systems, the deregulated electricity market is one in which market gaming behaviors may significantly affect the cost of electricity delivered to the market. In particular, on the supply side, the primary goal of power generating companies (PGCs) is to develop strategic bids that maximize their long-term trading profits under intrinsic uncertainty. In such repeated and dynamic settings, one fundamental challenge is that a PGC neither has prior knowledge of its unknown opponents' incentives nor observes their strategies and obtained profits. In the common setting, once a bidding auction has concluded, the PGC observes only the market clearing price (MCP) of that round and its own winning or losing status. While it is typical to assume some perfect or bounded rationality model of the PGCs, their real behaviors do not follow such assumptions, due to lack of complete information, computational intractability, imperfect execution, etc. We formulate the problem of sequentially optimizing a PGC's bids as an adversarial multi-armed bandit (MAB) model. Specifically, at each round, a PGC plays against all other opponents by choosing from an infinite set of possible strategies that are split into continuous intervals by the sequentially observed MCPs. At the end of each round, the PGC observes the outcome of the auction, updates its estimate of each interval's expected bid fitness (i.e., how much expected profit bids in that interval could achieve), and selects the bid for the next round using the proposed algorithm Exp3C (exponential-weight for exploration and exploitation with continuous values).
Experimental results on a real dataset demonstrate that Exp3C outperforms other heuristic schemes, including pure greedy, <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">${\boldsymbol{\varepsilon}}$</tex-math></inline-formula>-greedy, and MCP-prediction-based bidding schemes. Moreover, we theoretically prove that the average regret per round of Exp3C is upper bounded by <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">${\boldsymbol{O}} ({2/\sqrt {\boldsymbol{T}} })$</tex-math></inline-formula>, where <italic xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">T</italic> is the number of total rounds. In summary, the proposed Exp3C has two distinct advantages. First, it is distributed, since a PGC's decisions depend only on its own past decisions and profits. Second, it is rational, since a PGC is given guarantees on its own accumulated profit regardless of other PGCs' behaviors.
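This record does not include Exp3C's pseudocode. As a rough, hypothetical sketch of the exponential-weight family it builds on, the classical Exp3 update over a finite set of arms (here standing in for the intervals induced by observed MCPs) can be written as follows; all names and parameters are illustrative, not the authors' implementation:

```python
import math
import random

def exp3(num_arms, gamma, reward_fn, T):
    """Classical Exp3: exponential weights with importance-weighted updates.

    num_arms  -- number of discrete arms (a stand-in for the MCP-split intervals)
    gamma     -- exploration rate in (0, 1]
    reward_fn -- callable (arm, round) -> reward in [0, 1]
    T         -- number of rounds
    """
    weights = [1.0] * num_arms
    total_reward = 0.0
    for t in range(T):
        wsum = sum(weights)
        # Mix the exponential-weight distribution with uniform exploration.
        probs = [(1 - gamma) * w / wsum + gamma / num_arms for w in weights]
        arm = random.choices(range(num_arms), weights=probs)[0]
        r = reward_fn(arm, t)
        total_reward += r
        # Importance-weighted reward estimate: only the played arm is updated,
        # matching the bandit feedback (only the auction outcome is observed).
        est = r / probs[arm]
        weights[arm] *= math.exp(gamma * est / num_arms)
    return total_reward, weights
```

In the paper's setting, each "arm" would correspond to a continuous bid interval delimited by previously observed MCPs, and the reward would be the (normalized) profit from the round's auction outcome; the exponential-weight update is what yields the no-regret guarantee of the form stated in the abstract.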
Year
2022
DOI
10.1109/TNSE.2022.3185060
Venue
IEEE Transactions on Network Science and Engineering
Keywords
Deregulated electricity market, no-regret learning, multi-armed bandit, strategic bidding
DocType
Journal
Volume
9
Issue
6
Citations
0
PageRank
0.34
References
13
Authors
4
Name         Order  Citations  PageRank
Yufeng Wang  1      0          0.34
Bo Zhang     2      41         9.80
Jianhua Ma   3      1401       148.82
Qun Jin      4      0          0.34