| Repository | Stars | Last Commit | Releases | Latest Release | Open Issues | License | Language | Description |
|---|---|---|---|---|---|---|---|---|
| bentrevett/pytorch-seq2seq | 5,024 | about 2 years ago | 0 | | 0 | mit | Jupyter Notebook | Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText. |
| huseinzol05/NLP-Models-Tensorflow | 1,329 | over 5 years ago | 0 | | 3 | mit | Jupyter Notebook | Gathers machine learning and TensorFlow deep learning models for NLP problems (1.13 < TensorFlow < 2.0). |
| thushv89/attention_keras | 429 | about 3 years ago | 0 | | 11 | mit | Python | Keras layer implementation of attention for sequential models. |
| cheng6076/SNLI-attention | 228 | over 9 years ago | 0 | | 2 | | Lua | SNLI with word-by-word attention using an LSTM encoder-decoder. |
| erickrf/autoencoder | 207 | almost 7 years ago | 0 | | 2 | mit | Python | Text autoencoder with LSTMs. |
| chenjun2hao/Attention_ocr.pytorch | 197 | about 7 years ago | 0 | | 23 | | Python | Implements an encoder-decoder model with attention for OCR. |
| zhongkaifu/Seq2SeqSharp | 188 | over 2 years ago | 1 | May 09, 2022 | 6 | other | C# | Seq2SeqSharp is a fast, flexible, tensor-based deep neural network framework written in .NET (C#). Highlights include automatic differentiation, multiple network types (Transformer, LSTM, BiLSTM, and so on), multi-GPU support, cross-platform operation (Windows, Linux, x86, x64, ARM), and a multimodal model for text and images. |
| Disiok/poetry-seq2seq | 154 | over 8 years ago | 0 | | 3 | mit | Jupyter Notebook | Chinese poetry generation. |
| neural-nuts/image-caption-generator | 136 | over 6 years ago | 0 | | 12 | bsd-3-clause | Jupyter Notebook | [DEPRECATED] A neural-network-based generative model for captioning images using TensorFlow. |
| cheng6076/Variational-LSTM-Autoencoder | 131 | about 10 years ago | 0 | | 4 | | Lua | Variational seq2seq model. |
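The common thread across most of these repositories is seq2seq attention: the decoder scores each encoder state against its current query and takes a weighted sum as context. Below is a minimal NumPy sketch of Bahdanau-style additive attention, not taken from any repository above; all names (`additive_attention`, `W_q`, `W_k`, `v`) and the random toy dimensions are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_attention(query, keys, W_q, W_k, v):
    """Bahdanau-style scoring: score_i = v . tanh(W_q q + W_k k_i).

    query: (d,) decoder state; keys: (T, d) encoder states.
    Returns the attention weights over the T keys and the
    weighted-sum context vector. (Illustrative sketch only.)
    """
    scores = np.array([v @ np.tanh(W_q @ query + W_k @ k) for k in keys])
    weights = softmax(scores)                   # distribution over time steps
    context = (weights[:, None] * keys).sum(0)  # convex combination of keys
    return weights, context

# Toy usage: 5 encoder states of dimension 4, one decoder query.
rng = np.random.default_rng(0)
d = 4
keys = rng.normal(size=(5, d))
query = rng.normal(size=d)
W_q = rng.normal(size=(d, d))
W_k = rng.normal(size=(d, d))
v = rng.normal(size=d)
weights, context = additive_attention(query, keys, W_q, W_k, v)
```

In a full model `W_q`, `W_k`, and `v` are learned parameters, and the context vector is concatenated with the decoder state before predicting the next token; the dot-product variant used by Transformers replaces the `tanh` scoring with `q @ k / sqrt(d)`.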