| Repository | Stars | Last commit | Releases | Latest release | Open issues | License | Language | Description |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| huseinzol05/Stock-Prediction-Models | 6,233 | almost 3 years ago | 0 | — | 46 | apache-2.0 | Jupyter Notebook | Gathers machine learning and deep learning models for stock forecasting, including trading bots and simulations |
| ShangtongZhang/DeepRL | 2,834 | over 3 years ago | 0 | — | 5 | mit | Python | Modularized implementation of deep RL algorithms in PyTorch |
| sweetice/Deep-reinforcement-learning-with-pytorch | 2,741 | about 3 years ago | 0 | — | 26 | mit | Python | PyTorch implementation of DQN, AC, ACER, A2C, A3C, PG, DDPG, TRPO, PPO, SAC, TD3, and more |
| megvii-research/ICCV2019-LearningToPaint | 2,125 | over 3 years ago | 0 | — | 0 | mit | Python | ICCV 2019 - Learning to Paint with Model-based Deep Reinforcement Learning |
| rail-berkeley/softlearning | 1,108 | over 2 years ago | 0 | — | 53 | other | Python | Softlearning is a reinforcement learning framework for training maximum-entropy policies in continuous domains; includes the official implementation of the Soft Actor-Critic algorithm |
| ikostrikov/pytorch-a3c | 768 | over 6 years ago | 0 | — | 17 | mit | Python | PyTorch implementation of Asynchronous Advantage Actor-Critic (A3C) from "Asynchronous Methods for Deep Reinforcement Learning" |
| uvipen/Super-mario-bros-A3C-pytorch | 735 | about 5 years ago | 0 | — | 10 | mit | Python | Asynchronous Advantage Actor-Critic (A3C) algorithm for Super Mario Bros |
| mimoralea/gdrl | 422 | about 4 years ago | 0 | — | 0 | bsd-3-clause | Jupyter Notebook | Grokking Deep Reinforcement Learning |
| keiohta/tf2rl | 408 | about 4 years ago | 23 | June 18, 2021 | 36 | mit | Python | TensorFlow 2 reinforcement learning |
| SforAiDl/genrl | 375 | about 4 years ago | 4 | March 31, 2020 | 52 | mit | Python | A PyTorch reinforcement learning library for generalizable and reproducible algorithm implementations, aiming to improve accessibility in RL |