| Repository | Stars | Issues | License | Language | Last updated | Description |
|---|---|---|---|---|---|---|
| germain-hug/Deep-RL-Keras | 507 | 12 | | Python | almost 6 years ago | Keras implementations of popular deep RL algorithms (A3C, DDQN, DDPG, Dueling DDQN) |
| Grzego/async-rl | 44 | 2 | MIT | Python | about 8 years ago | Variation of "Asynchronous Methods for Deep Reinforcement Learning" with multiple processes generating experience for the agent (Keras + Theano + OpenAI Gym); implements 1-step Q-learning, n-step Q-learning, and A3C |
| shalabhsingh/A3C_Keras_FlappyBird | 32 | 0 | MIT | Python | over 8 years ago | Uses the asynchronous advantage actor-critic (A3C) algorithm to play Flappy Bird with Keras |
| calclavia/rl | 14 | 6 | | Python | about 9 years ago | Reinforcement learning algorithms implemented using Keras and OpenAI Gym |
| amar-iastate/L2RPN-using-A3C | 10 | 2 | LGPL-3.0 | Python | almost 7 years ago | Reinforcement learning using the actor-critic framework for the L2RPN challenge (https://l2rpn.chalearn.org/ and https://competitions.codalab.org/competitions/22845#learn_the_details-overview). The agent trained with this code was one of the winners of the challenge. The code runs on the pypownet environment (https://github.com/MarvinLer/pypownet) and is released under LGPLv3. |
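The common thread in these repositories is the advantage actor-critic family (A3C/A2C). As a rough illustration of the core update they all build on, here is a minimal tabular sketch, not taken from any of the listed codebases: all function and variable names (`a2c_update`, `theta`, `v`) are illustrative assumptions, and the toy one-state problem stands in for a real Gym environment.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D array of logits.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def a2c_update(theta, v, state_id, action, reward, lr=0.1):
    """One advantage actor-critic step on a tabular toy problem.

    theta: (n_states, n_actions) policy logits (the actor)
    v:     (n_states,) state-value estimates (the critic)
    Treats the step as terminal, so the advantage is reward - V(s)
    with no bootstrapped next-state value.
    """
    probs = softmax(theta[state_id])
    advantage = reward - v[state_id]
    # Critic: move the value estimate toward the observed return.
    v[state_id] += lr * advantage
    # Actor: policy-gradient step; grad log pi(a|s) = one_hot(a) - probs.
    grad_log_pi = -probs
    grad_log_pi[action] += 1.0
    theta[state_id] += lr * advantage * grad_log_pi
    return theta, v

# Toy usage: one state, two actions; action 1 always pays 1, action 0 pays 0.
rng = np.random.default_rng(0)
theta = np.zeros((1, 2))
v = np.zeros(1)
for _ in range(500):
    a = rng.choice(2, p=softmax(theta[0]))
    r = 1.0 if a == 1 else 0.0
    theta, v = a2c_update(theta, v, 0, a, r)
print(softmax(theta[0]))  # probability mass shifts toward the rewarding action
```

In A3C proper, several worker processes run this same update asynchronously against shared parameters (as in Grzego/async-rl), with neural networks in place of the tabular `theta` and `v`.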