5 Ecosystem of FinRL and Conclusions
In this paper, we have developed an open-source framework, FinRL, to help quantitative traders overcome the steep learning curve of applying DRL to trading. Customization is accessible at all layers, from market environments and trading agents up to trading tasks. FinRL follows a training-testing-trading pipeline to reduce the simulation-to-reality gap. Within FinRL, historical market data and live trading platforms are reconfigured into standardized OpenAI Gym-style environments; state-of-the-art DRL algorithms are implemented so that users can train trading agents through the pipeline; and an automated backtesting module is provided to evaluate trading performance. Moreover, benchmark schemes for typical trading tasks are provided as stepping stones for practitioners.
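To make the pipeline concrete, the sketch below wraps a synthetic price series in an OpenAI Gym-style environment [5] and trains a PPO agent [42] with Stable Baselines3 [37], one of the DRL libraries FinRL builds on. It is a minimal illustration only: ToyTradingEnv, its state design, and the train/test split are our own simplified assumptions, not FinRL's actual StockTradingEnv API, and we assume the classic gym interface (Stable Baselines3 1.x).

```python
# Minimal sketch of the training-testing-trading pipeline (illustrative only).
import gym
import numpy as np
from gym import spaces
from stable_baselines3 import PPO


class ToyTradingEnv(gym.Env):
    """Gym-style wrapper around a price series; state = (cash, shares, price)."""

    def __init__(self, prices):
        super().__init__()
        self.prices = prices
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,))   # fraction of 10 shares to trade
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(3,))

    def reset(self):
        self.t, self.cash, self.shares = 0, 1_000.0, 0.0
        return self._obs()

    def _obs(self):
        return np.array([self.cash, self.shares, self.prices[self.t]], dtype=np.float32)

    def step(self, action):
        price = self.prices[self.t]
        # Buy (+) or sell (-) shares, disallowing shorting and leverage.
        trade = np.clip(float(action[0]) * 10, -self.shares, self.cash / price)
        self.shares += trade
        self.cash -= trade * price                # self-financing trade
        self.t += 1
        done = self.t == len(self.prices) - 1
        reward = self.shares * (self.prices[self.t] - price)  # change in portfolio value
        return self._obs(), reward, done, {}


# Training-testing split of the (synthetic) historical data.
prices = np.maximum(100 + np.cumsum(np.random.randn(300)), 1.0)
train_env, test_env = ToyTradingEnv(prices[:200]), ToyTradingEnv(prices[200:])

model = PPO("MlpPolicy", train_env, verbose=0)
model.learn(total_timesteps=10_000)               # training stage

obs, done = test_env.reset(), False
while not done:                                   # testing (backtest) stage
    action, _ = model.predict(obs)
    obs, reward, done, _ = test_env.step(action)
# The trading stage would replace test_env with a live paper-trading environment.
```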
Ecosystem of FinRL Framework. We believe that the rise of the open-source community has fostered the development of AI in finance in both academia and industry. As the need for an open-source AI-for-finance ecosystem is pressing in the finance community, FinRL provides an ecosystem that comprehensively supports deep reinforcement learning in finance and serves users at all levels of our open-source community.
FinRL offers an overall framework for applying DRL agents to various markets, state-of-the-art DRL algorithms, financial tasks (portfolio allocation, cryptocurrency trading, high-frequency trading), live trading support, etc. For entry-level users, FinRL aims to provide a demonstrative and educational experience, with hands-on documentation to help beginners get familiar with DRL applications in finance. For intermediate-level users, such as full-stack developers and professionals, FinRL provides ElegantRL [28], a lightweight and scalable DRL library with finance-oriented optimizations. For advanced-level users, such as investment banks and hedge funds, FinRL delivers FinRL-Podracer [24, 29], a cloud-native solution with high-performance and highly scalable training.
FinRL also develops other tools that support the ecosystem. FinRL-Meta [30] adds financial data engineering to FinRL, with a unified data processor and hundreds of market environments; a sketch of such a processor follows below. Explainable DRL for portfolio management [17] and a DRL ensemble strategy for stock trading [50, 51] are also implemented.
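The snippet below sketches what a unified data processor can look like: one common interface (download, clean, add indicators) that each market data source implements, so every source feeds the same environment-construction code. All class and method names here (BaseProcessor, CSVProcessor, sma_10) are illustrative assumptions, not FinRL-Meta's exact API.

```python
# Illustrative sketch of a unified data processor in the spirit of FinRL-Meta [30].
from abc import ABC, abstractmethod

import pandas as pd


class BaseProcessor(ABC):
    """One interface, many data sources: download, clean, add indicators."""

    @abstractmethod
    def download(self, tickers, start, end) -> pd.DataFrame:
        """Return a long-format DataFrame with columns time, ticker, close."""

    def clean(self, df: pd.DataFrame) -> pd.DataFrame:
        # Align all tickers on one time index and forward-fill missing bars.
        return df.sort_values(["time", "ticker"]).ffill()

    def add_indicators(self, df: pd.DataFrame) -> pd.DataFrame:
        # Example feature: 10-bar simple moving average per ticker.
        df["sma_10"] = df.groupby("ticker")["close"].transform(
            lambda s: s.rolling(10).mean()
        )
        return df


class CSVProcessor(BaseProcessor):
    """Hypothetical source that reads bars from a local CSV file."""

    def download(self, tickers, start, end) -> pd.DataFrame:
        df = pd.read_csv("bars.csv", parse_dates=["time"])
        mask = df["ticker"].isin(tickers) & df["time"].between(start, end)
        return df.loc[mask]


# Usage: every source yields the same standardized DataFrame.
# proc = CSVProcessor()
# df = proc.add_indicators(proc.clean(proc.download(["AAPL"], "2020-01-01", "2020-12-31")))
```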
Future work. Future research directions include investigating DRL's potential on limit order books [48], hedging [6], market making [16], liquidation [3], and trade execution [27].
References
[1] Joshua Achiam. 2018. Spinning Up in Deep Reinforcement Learning. https://spinningup.openai.com
[2] Andrew Ang. 2012. Mean-variance investing. Columbia Business School Research Paper No. 12/49 (2012).
[3] Wenhang Bao and Xiao-Yang Liu. 2019. Multi-agent deep reinforcement learning for liquidation strategy analysis. ICML Workshop on Applications and Infrastructure for Multi-Agent Learning (2019).
[4] Stelios D Bekiros. 2010. Fuzzy adaptive decision-making for boundedly rational traders in speculative stock markets. European Journal of Operational Research 202, 1 (2010), 285–293.
[5] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. 2016. OpenAI Gym. arXiv preprint arXiv:1606.01540 (2016).
[6] Hans Buehler, Lukas Gonon, Josef Teichmann, Ben Wood, Baranidharan Mohan, and Jonathan Kochems. 2019. Deep hedging: Hedging derivatives under generic market frictions using reinforcement learning. Swiss Finance Institute Research Paper 19-80 (2019).
[7] Pablo Samuel Castro, Subhodeep Moitra, Carles Gelada, Saurabh Kumar, and Marc G. Bellemare. 2018. Dopamine: A research framework for deep reinforcement learning. arXiv preprint arXiv:1812.06110 (2018).
[8] China Securities Index Co., Ltd. 2017. CSI 300. http://www.csindex.com.cn/uploads/indices/detail/files/en/145_000300_Fact_Sheet_en.pdf
[9] Yue Deng, Feng Bao, Youyong Kong, Zhiquan Ren, and Qionghai Dai. 2016. Deep direct reinforcement learning for financial signal representation and trading. IEEE Transactions on Neural Networks and Learning Systems 28, 3 (2016), 653–664.
[10] Prafulla Dhariwal, Christopher Hesse, Oleg Klimov, Alex Nichol, Matthias Plappert, Alec Radford, John Schulman, Szymon Sidor, Yuhuai Wu, and Peter Zhokhov. 2017. OpenAI baselines. https://github.com/openai/baselines.
[11] Hao Dong, Akara Supratak, Luo Mai, Fangde Liu, Axel Oehmichen, Simiao Yu, and Yike Guo. 2017. TensorLayer: A versatile library for efficient deep learning development. In Proceedings of the 25th ACM International Conference on Multimedia. 1201–1204.
[12] Shanghai Stock Exchange. 2018. SSE 180 Index Methodology. http://www.sse.com.cn/market/sseindex/indexlist/indexdetails/indexmethods/c/IndexHandbook_EN_SSE180.pdf
[13] Thomas G. Fischer. 2018. Reinforcement learning in financial markets - a survey. FAU Discussion Papers in Economics. Friedrich-Alexander University Erlangen-Nuremberg, Institute for Economics.
[14] Scott Fujimoto, Herke Van Hoof, and David Meger. 2018. Addressing function approximation error in actor-critic methods. International Conference on Machine Learning (2018).
[15] Prakhar Ganesh and Puneet Rakheja. 2018. Deep reinforcement learning in high frequency trading. arXiv preprint arXiv:1809.01506 (2018).
[16] Sumitra Ganesh, Nelson Vadori, Mengda Xu, Hua Zheng, Prashant Reddy, and Manuela Veloso. 2019. Reinforcement Learning for Market Making in a Multiagent Dealer Market. NeurIPS’19 Workshop on Robust AI in Financial Services.
[17] Mao Guan and Xiao-Yang Liu. 2021. Explainable Deep Reinforcement Learning for Portfolio Management: An Empirical Approach. ACM International Conference on AI in Finance (ICAIF) (2021).
[18] Chien Yi Huang. 2018. Financial trading as a game: A deep reinforcement learning approach. arXiv preprint arXiv:1807.02787 (2018).
[19] Hang Seng Index. 2020. Hang Seng Index and Sub-indexes. https://www.hsi.com.hk/eng/indexes/all-indexes/hsi
[20] Zhengyao Jiang and Jinjun Liang. 2017. Cryptocurrency portfolio management with deep reinforcement learning. Intelligent Systems Conference (IntelliSys) (2017), 905–913.
[21] Zhengyao Jiang, Dixing Xu, and Jinjun Liang. 2017. A deep reinforcement learning framework for the financial portfolio management problem. arXiv preprint arXiv:1706.10059 (2017).
[22] Prahlad Koratamaddi, Karan Wadhwani, Mridul Gupta, and Sriram G. Sanjeevi. 2021. Market sentiment-aware deep reinforcement learning approach for stock portfolio allocation. Engineering Science and Technology, an International Journal.
[23] Mark Kritzman and Yuanzhen Li. 2010. Skulls, financial turbulence, and risk management. Financial Analysts Journal 66 (2010).
[24] Zechu Li, Xiao-Yang Liu, Jiahao Zheng, Zhaoran Wang, Anwar Walid, and Jian Guo. 2021. FinRL-Podracer: High Performance and Scalable Deep Reinforcement Learning for Quantitative Finance. ACM International Conference on AI in Finance (ICAIF) (2021).
[25] Eric Liang, Richard Liaw, Robert Nishihara, Philipp Moritz, Roy Fox, Ken Goldberg, Joseph E. Gonzalez, Michael I. Jordan, and Ion Stoica. 2018. RLlib: Abstractions for Distributed Reinforcement Learning. In International Conference on Machine Learning (ICML).
[26] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. 2016. Continuous control with deep reinforcement learning. ICLR (2016).
[27] Siyu Lin and Peter Beling. 2020. A Deep Reinforcement Learning Framework for Optimal Trade Execution. In ECML/PKDD.
[28] Xiao-Yang Liu, Zechu Li, Zhaoran Wang, and Jiahao Zheng. 2021. ElegantRL: A scalable and elastic deep reinforcement learning library. https://github.com/AI4Finance-Foundation/ElegantRL.
[29] Xiao-Yang Liu, Zechu Li, Zhuoran Yang, Jiahao Zheng, Zhaoran Wang, Anwar Walid, Jian Guo, and Michael Jordan. 2021. ElegantRL-Podracer: Scalable and Elastic Library for Cloud-Native Deep Reinforcement Learning. Deep RL Workshop, NeurIPS 2021 (2021).
[30] Xiao-Yang Liu, Jingyang Rui, Jiechao Gao, Liuqing Yang, Hongyang Yang, Zhaoran Wang, Christina Dan Wang, and Jian Guo. 2021. Data-Driven Deep Reinforcement Learning in Quantitative Finance. Data-Centric AI Workshop, NeurIPS (2021).
[31] B. G. Malkiel. 2003. Passive investment strategies and efficient markets. European Financial Management 9 (2003), 1–10.
[32] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. 2016. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning. 1928–1937.
[33] John Moody and Matthew Saffell. 2001. Learning to trade via direct reinforcement. IEEE Transactions on Neural Networks 12, 4 (2001), 875–889.
[34] John Moody, Lizhong Wu, Yuansong Liao, and Matthew Saffell. 1998. Performance functions and reinforcement learning for trading systems and portfolios. Journal of Forecasting 17 (1998), 441–470.
[35] Abhishek Nan, Anandh Perumal, and Osmar R. Zaiane. 2020. Sentiment and knowledge based algorithmic trading with deep reinforcement learning. arXiv preprint arXiv:2001.09403 (2020).
[36] Quantopian. 2019. Pyfolio: A toolkit for portfolio and risk analytics in Python. https://github.com/quantopian/pyfolio.
[37] Antonin Raffin, Ashley Hill, Maximilian Ernestus, Adam Gleave, Anssi Kanervisto, and Noah Dormann. 2019. Stable Baselines3. https://github.com/DLR-RM/stable-baselines3.
[38] Francesco Rundo. 2019. Deep LSTM with reinforcement learning layer for financial trend prediction in FX high frequency trading systems. Applied Sciences 9 (2019), 1–18.
[39] Jonathan Sadighian. 2019. Deep reinforcement learning in Cryptocurrency market making. arXiv: Trading and Market Microstructure (2019).
[40] Svetlana Sapuric and Angelika Kokkinaki. 2014. Bitcoin is volatile! Isn't that right?. In BIS.
[41] Otabek Sattarov, Azamjon Muminov, Cheol Lee, Hyun Kang, Ryumduck Oh, Junho Ahn, Hyung Oh, and Heung Jeon. 2020. Recommending cryptocurrency trading points with deep reinforcement learning approach. Applied Sciences 10 (2020).
[42] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 (2017).
[43] William F Sharpe. 1970. Portfolio theory and capital markets. McGraw-Hill College.
[44] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. 2016. Mastering the game of Go with deep neural networks and tree search. Nature 529, 7587 (2016), 484–489.
[45] Richard S. Sutton, David Mcallester, Satinder Singh, and Yishay Mansour. 2000. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems 12. MIT Press, 1057–1063.
[46] Evgeni B. Tarassov. 2016. Exchange traded funds (ETF): History, mechanism, academic literature review and research perspectives. Microeconomics: General Equilibrium & Disequilibrium Models of Financial Markets eJournal (2016).
[47] Nelson Vadori, Sumitra Ganesh, Prashant Reddy, and Manuela Veloso. 2020. Risk-sensitive reinforcement learning: a martingale approach to reward uncertainty. International Conference on AI in Finance (ICAIF) (2020).
[48] Svitlana Vyetrenko, David Byrd, Nick Petosa, Mahmoud Mahfouz, Danial Dervovic, Manuela Veloso, and Tucker Hybinette Balch. 2020. Get real: Realism metrics for robust limit order book market simulations. International Conference on AI in Finance (ICAIF) (2020).
[49] Christopher JCH Watkins and Peter Dayan. 1992. Q-learning. Machine Learning 8, 3-4 (1992), 279–292.
[50] Zhuoran Xiong, Xiao-Yang Liu, Shan Zhong, Hongyang Yang, and Anwar Walid. 2018. Practical deep reinforcement learning approach for stock trading. NeurIPS Workshop (2018).
[51] Hongyang Yang, Xiao-Yang Liu, Shan Zhong, and Anwar Walid. 2020. Deep reinforcement learning for automated stock trading: An ensemble strategy. ACM International Conference on AI in Finance (ICAIF) (2020).
[52] Daochen Zha, Kwei-Herng Lai, Kaixiong Zhou, and Xia Hu. 2019. Experience replay optimization. International Joint Conference on Artificial Intelligence (IJCAI).
[53] Yong Zhang and Xingyu Yang. 2017. Online portfolio selection strategy based on combining experts’ advice. Computational Economics 50, 1 (2017), 141–159.
[54] Zihao Zhang, Stefan Zohren, and Stephen Roberts. 2020. Deep reinforcement learning for trading. The Journal of Financial Data Science 2, 2 (2020), 25–40.