| Repository | Stars | Last Updated | Open Issues | License | Language | Description |
|---|---|---|---|---|---|---|
| yaohungt/Multimodal-Transformer | 418 | over 4 years ago | 8 | MIT | Python | [ACL'19] [PyTorch] Multimodal Transformer |
| PengboLiu/NLP-Papers | 248 | about 5 years ago | 0 | | | Papers I have read, mainly about NLP. Contributions via issues are welcome. |
| bzhangGo/zero | 131 | almost 3 years ago | 0 | BSD-3-Clause | Python | Zero -- A neural machine translation system |
| ZhengZixiang/ATPapers | 105 | about 5 years ago | 0 | | | Worth-reading papers and related resources on attention mechanisms, Transformers, and pretrained language models (PLMs) such as BERT |
| ShilinHe/interpretableNLP | 70 | over 5 years ago | 0 | MIT | | A list of publications on NLP interpretability (PRs welcome) |
| hussam123/Text-Summarization | 34 | almost 8 years ago | 0 | | | Abstractive and extractive deep learning methods for text summarisation |
| mourga/affective-attention | 26 | about 6 years ago | 2 | | Python | Source code for the ACL 2019 paper "Attention-based Conditioning Methods for External Knowledge Integration" |
| utahnlp/layer_augmentation | 23 | over 5 years ago | 4 | Apache-2.0 | Python | Implementation of the NLI model in the ACL 2019 paper "Augmenting Neural Networks with First-order Logic" |
| SAP-samples/acl2019-commonsense | 13 | almost 4 years ago | 5 | Apache-2.0 | Python | Source code for the paper "Attention Is (not) All You Need for Commonsense Reasoning", published at ACL 2019 |
| zhijing-jin/pytorch_RelationExtraction_AttentionBiLSTM | 12 | over 6 years ago | 2 | | Python | PyTorch implementation of an attention-based BiLSTM for relation extraction ("Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification", ACL 2016, http://www.aclweb.org/anthology/P16-2034) |