| Repository | Stars | | | | Last Updated | Releases | Latest Release | | License | Language | Description |
|---|---|---|---|---|---|---|---|---|---|---|---|
| cedrickchee/awesome-transformer-nlp | 991 | | 0 | 0 | over 2 years ago | 0 | | 1 | mit | | A curated list of NLP resources focused on Transformer networks, attention mechanisms, GPT, BERT, ChatGPT, LLMs, and transfer learning. |
| ZixuanKe/PyContinual | 185 | | 0 | 0 | about 3 years ago | 0 | | 1 | | Python | PyContinual (an easy and extensible framework for continual learning) |
| somosnlp/nlp-de-cero-a-cien | 135 | | 0 | 0 | over 3 years ago | 0 | | 1 | | Jupyter Notebook | Hands-on course: NLP from zero to one hundred 🤗 |
| densechen/AReLU | 58 | | 0 | 1 | over 4 years ago | 1 | January 23, 2021 | 1 | mit | Jupyter Notebook | AReLU: Attention-based Rectified Linear Unit |
| emadeldeen24/ADAST | 23 | | 0 | 0 | over 2 years ago | 0 | | 0 | apache-2.0 | Python | [IEEE TETCI] "ADAST: Attentive Cross-domain EEG-based Sleep Staging Framework with Iterative Self-Training" |
| jimmyg1997/NTUA-slp-nlp | 13 | | 0 | 0 | almost 6 years ago | 0 | | 0 | mit | Jupyter Notebook | 💻 Speech and Natural Language Processing (SLP & NLP) lab assignments for ECE NTUA |
| marcomoldovan/hierarchical-language-modeling | 5 | | 0 | 0 | almost 3 years ago | 0 | | 2 | mit | Jupyter Notebook | We address the task of learning contextualized word, sentence, and document representations with a hierarchical language model: Transformer-based encoders are stacked at the sentence level and then at the document level, and trained with masked token prediction. |
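The AReLU entry above names an attention-based rectified linear unit with two learnable scalars. A minimal NumPy sketch of that idea is below; the exact clipping bounds and the `(1 + sigmoid(beta))` positive-branch scale are assumptions based on the repo's title, not a verified copy of its implementation, and `alpha`/`beta` are plain floats here rather than trained parameters.

```python
import numpy as np

def arelu(x, alpha=0.9, beta=2.0):
    """Sketch of an attention-based rectified linear unit (AReLU-style).

    Assumed form: the negative part is scaled by clip(alpha, 0.01, 0.99)
    and the positive part by (1 + sigmoid(beta)). In the real repo, alpha
    and beta would be learnable per-layer parameters.
    """
    c_alpha = np.clip(alpha, 0.01, 0.99)      # bounded negative-slope "attention"
    sig_beta = 1.0 / (1.0 + np.exp(-beta))    # sigmoid gate for the positive part
    return np.where(x >= 0, (1.0 + sig_beta) * x, c_alpha * x)

print(arelu(np.array([-1.0, 0.0, 1.0]), alpha=0.5, beta=0.0))
```

With `alpha=0.5` and `beta=0.0` (sigmoid = 0.5), a negative input is halved and a positive input is scaled by 1.5, so the activation behaves like a gated leaky ReLU.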
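The last entry describes a hierarchical language model: encode each sentence's tokens with a Transformer-style encoder, pool to sentence vectors, then encode those vectors again at the document level. A toy NumPy sketch of that stacking is below; the single-head attention without learned projections and the mean-pooling step are simplifications for illustration, not the repo's actual architecture (which would use full trained Transformer blocks and masked token prediction).

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # embedding width

def self_attention(x):
    """Single-head scaled dot-product self-attention (no learned weights)."""
    scores = x @ x.T / np.sqrt(x.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

def encode(x):
    """One toy 'encoder block': attention + residual, then mean-pool to a vector."""
    h = x + self_attention(x)
    return h.mean(axis=0)

# A document as a list of sentences, each a (num_tokens, D) embedding matrix.
doc = [rng.normal(size=(5, D)), rng.normal(size=(7, D)), rng.normal(size=(4, D))]

sentence_vecs = np.stack([encode(s) for s in doc])  # sentence-level encoder
doc_vec = encode(sentence_vecs)                     # document-level encoder
print(doc_vec.shape)  # (8,)
```

The key design point is that the document-level encoder never sees tokens, only pooled sentence vectors, which keeps the attention cost quadratic in sentences rather than in total tokens.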