| Repository | Stars | Last commit | Releases | Latest release | Open issues | License | Language | Description |
|---|---|---|---|---|---|---|---|---|
| princewen/tensorflow_practice | 6,377 | over 2 years ago | 0 | | 46 | | Python | TensorFlow practice exercises, covering reinforcement learning, recommender systems, NLP, and more |
| bentrevett/pytorch-seq2seq | 5,024 | about 2 years ago | 0 | | 0 | MIT | Jupyter Notebook | Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText. |
| spro/practical-pytorch | 4,272 | almost 5 years ago | 0 | | 91 | MIT | Jupyter Notebook | Deprecated and no longer maintained; go to https://github.com/pytorch/tutorials instead |
| NELSONZHAO/zhihu | 3,228 | almost 5 years ago | 0 | | 31 | | Jupyter Notebook | Source code for the author's personal column (https://zhuanlan.zhihu.com/zhaoyeyu), implemented in Python 3.6. Includes natural language processing and computer vision projects, such as text generation, machine translation, deep convolutional GANs, and other hands-on code. |
| zzw922cn/awesome-speech-recognition-speech-synthesis-papers | 2,792 | over 2 years ago | 0 | | 2 | MIT | | Automatic Speech Recognition (ASR), Speaker Verification, Speech Synthesis, Text-to-Speech (TTS), Language Modelling, Singing Voice Synthesis (SVS), Voice Conversion (VC) |
| golbin/TensorFlow-Tutorials | 2,108 | over 3 years ago | 0 | | 23 | | Python | Provides source code for practicing TensorFlow step by step, from the basics through to applications |
| fendouai/PyTorchDocs | 1,743 | about 4 years ago | 0 | | 5 | | Python | Official PyTorch tutorials in Chinese, including the 60-minute quick-start, advanced tutorials, computer vision, natural language processing, generative adversarial networks, and reinforcement learning. Stars and forks welcome! |
| Arturus/kaggle-web-traffic | 1,402 | over 7 years ago | 0 | | 8 | MIT | Jupyter Notebook | 1st place solution |
| guillaume-chevalier/seq2seq-signal-prediction | 1,036 | over 3 years ago | 0 | | 10 | Apache-2.0 | Jupyter Notebook | Signal forecasting with a Sequence-to-Sequence (seq2seq) Recurrent Neural Network (RNN) model in TensorFlow - Guillaume Chevalier |
| RUCAIBox/TextBox | 966 | almost 3 years ago | 10 | April 15, 2021 | 3 | MIT | Python | TextBox 2.0 is a text generation library with pre-trained language models |