| Repository | Stars | Dependent packages | Dependent repos | Last commit | Releases | Latest release | Open issues | License | Language | Description |
|---|---:|---:|---:|---|---:|---|---:|---|---|---|
| jadore801120/attention-is-all-you-need-pytorch | 7,910 | 0 | 0 | over 2 years ago | 0 | | 74 | MIT | Python | A PyTorch implementation of the Transformer model in "Attention is All You Need". |
| tensorflow/nmt | 6,085 | 0 | 0 | over 3 years ago | 0 | | 275 | Apache-2.0 | Python | TensorFlow Neural Machine Translation Tutorial |
| Kyubyong/transformer | 3,882 | 0 | 0 | about 3 years ago | 0 | | 134 | Apache-2.0 | Python | A TensorFlow implementation of the Transformer: Attention Is All You Need |
| harvardnlp/seq2seq-attn | 1,167 | 0 | 0 | over 5 years ago | 0 | | 14 | MIT | Lua | Sequence-to-sequence model with LSTM encoder/decoders and attention |
| datalogue/keras-attention | 656 | 0 | 0 | almost 7 years ago | 0 | | 22 | AGPL-3.0 | Python | Visualizing RNNs using the attention mechanism |
| Ha0Tang/AttentionGAN | 564 | 0 | 0 | almost 3 years ago | 0 | | 16 | Other | Python | AttentionGAN for unpaired image-to-image translation and multi-domain image-to-image translation |
| keon/seq2seq | 502 | 0 | 0 | over 5 years ago | 0 | | 11 | MIT | Python | Minimal Seq2Seq model with attention for neural machine translation in PyTorch |
| gordicaleksa/pytorch-original-transformer | 376 | 0 | 0 | over 5 years ago | 0 | | 0 | MIT | Jupyter Notebook | My implementation of the original Transformer model (Vaswani et al.), with an additional playground.py file for visualizing otherwise hard-to-grasp concepts; pretrained IWSLT models are currently included. |
| CyberZHG/keras-transformer | 312 | 13 | 3 | about 4 years ago | 39 | January 22, 2022 | 0 | MIT | Python | Transformer implemented in Keras |
| DongjunLee/transformer-tensorflow | 304 | 0 | 0 | almost 8 years ago | 0 | | 5 | | Python | TensorFlow implementation of "Attention Is All You Need (2017. 6)" |
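
Every repository above implements some variant of the attention mechanism. As a point of reference (not code from any of the listed projects), here is a minimal NumPy sketch of the scaled dot-product attention at the core of the Transformer implementations, softmax(QKᵀ/√d_k)·V; the function name and shapes are illustrative choices:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_queries, n_keys) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the key axis
    return weights @ V                            # attention-weighted sum of values

# Tiny example: 2 queries attend over 3 key/value pairs of dimension 4.
rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4)
```

Each output row is a convex combination of the value rows, weighted by how strongly the corresponding query matches each key; the Transformer repositories above wrap this kernel in multi-head projections and masking.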