What is Flow?
Flow is a deep reinforcement learning framework for mixed-autonomy traffic. As a traffic control benchmarking framework, it provides a suite of traffic control scenarios (benchmarks), tools for designing custom traffic scenarios, and integration with deep reinforcement learning and traffic microsimulation libraries.
Are there any installation instructions?
Yes! Please refer to our Installation instructions.
Does Flow have proper documentation?
Yes! Flow has rich documentation, which we update frequently. Please refer to the Flow documentation.
How can I ask my questions?
Please direct technical questions to the project Slack. If you have a non-technical inquiry, please send us an email.
Is Flow open-source? What is the license?
Yes! Flow is open-source for public use, and it is licensed under the MIT license.
How can I report a bug?
You can report bugs by submitting GitHub issues. To submit a GitHub issue, please click here.
I'd like to see some tutorials before I start. Do you have any?
Yes! Please check our Tutorial Page to get started with deep reinforcement learning and transportation. We also have Python Jupyter Tutorials for Flow.
I would really like to contribute to this project. How can I do that?
Thank you for your interest in contributing to Flow! Please submit your contributions as GitHub pull requests. Click here to open a new pull request.
Should I cite any paper or give acknowledgements if I use Flow?
If you use Flow for academic research, you are highly encouraged to cite this paper:
C. Wu, A. Kreidieh, K. Parvate, E. Vinitsky, A. Bayen, "Flow: Architecture and Benchmarking for Reinforcement Learning in Traffic Control," CoRR, vol. abs/1710.05465, 2017. Available: arXiv
If you use the benchmarks, you are highly encouraged to cite this paper:
E. Vinitsky, A. Kreidieh, L. L. Flem, N. Kheterpal, K. Jang, C. Wu, F. Wu, R. Liaw, E. Liang, A. M. Bayen. "Benchmarks for reinforcement learning in mixed-autonomy traffic". In Conference on Robot Learning (pp. 399-409). Available: PMLR
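For convenience, the two citations above could be entered in BibTeX roughly as follows. The entry keys are illustrative, author names are kept as initials from the citations above, and fields not stated there (such as the benchmarks paper's year) should be verified against the published versions:

```bibtex
@article{wu2017flow,
  title   = {Flow: Architecture and Benchmarking for Reinforcement Learning in Traffic Control},
  author  = {Wu, C. and Kreidieh, A. and Parvate, K. and Vinitsky, E. and Bayen, A.},
  journal = {CoRR},
  volume  = {abs/1710.05465},
  year    = {2017}
}

@inproceedings{vinitsky2018benchmarks,
  title     = {Benchmarks for Reinforcement Learning in Mixed-Autonomy Traffic},
  author    = {Vinitsky, E. and Kreidieh, A. and Flem, L. L. and Kheterpal, N. and Jang, K. and Wu, C. and Wu, F. and Liaw, R. and Liang, E. and Bayen, A. M.},
  booktitle = {Conference on Robot Learning},
  pages     = {399--409},
  year      = {2018}
}
```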