| Repository | Stars | Dependent Packages | Dependent Repos | Last Commit | Total Releases | Latest Release | Open Issues | License | Language | Description |
|---|---|---|---|---|---|---|---|---|---|---|
| jessevig/bertviz | 5,547 | 0 | 3 | over 2 years ago | 5 | April 02, 2022 | 8 | apache-2.0 | Python | BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.) (usage sketch below) |
| hila-chefer/Transformer-Explainability | 1,596 | 0 | 0 | about 2 years ago | 0 | | 9 | mit | Jupyter Notebook | [CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer-based networks. |
| kaushalshetty/Structured-Self-Attention | 412 | 0 | 0 | over 6 years ago | 0 | | 5 | mit | Python | A Structured Self-attentive Sentence Embedding |
| mlpotter/Transformer_Time_Series | 331 | 0 | 0 | about 3 years ago | 0 | | 2 | | Jupyter Notebook | Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting (NeurIPS 2019) |
| keisen/tf-keras-vis | 286 | 0 | 2 | over 2 years ago | 30 | October 06, 2023 | 32 | mit | Python | Neural network visualization toolkit for tf.keras (saliency sketch below) |
| cbaziotis/neat-vision | 175 | 0 | 0 | almost 8 years ago | 0 | | 3 | mit | Vue | Neat (Neural Attention) Vision is a visualization tool for the attention mechanisms of deep-learning models for Natural Language Processing (NLP) tasks. (framework-agnostic) |
| rentainhe/visualization | 142 | 0 | 0 | about 4 years ago | 9 | October 04, 2021 | 3 | mit | Python | A collection of visualization functions |
| triplemeng/hierarchical-attention-model | 87 | 0 | 0 | over 8 years ago | 0 | | 0 | | HTML | Hierarchical attention model |
| SIDN-IAP/attnvis | 87 | 0 | 0 | almost 6 years ago | 0 | | 3 | apache-2.0 | HTML | Minimal Interactive Attention Visualization |
| wbw520/scouter | 77 | 0 | 0 | about 4 years ago | 0 | | 3 | | Python | SCOUTER: Slot Attention-based Classifier for Explainable Image Recognition |
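
To give a feel for how lightweight these tools are, here is a minimal BertViz sketch for rendering an interactive attention view in a Jupyter notebook. The model name and input sentence are arbitrary placeholders, not anything prescribed by the repo; it assumes `bertviz` and `transformers` are installed.

```python
# Minimal BertViz sketch: interactive per-head attention view in a notebook.
# Assumes `pip install bertviz transformers`; model and sentence are placeholders.
from transformers import AutoModel, AutoTokenizer
from bertviz import head_view

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("The cat sat on the mat", return_tensors="pt")
outputs = model(**inputs)  # outputs.attentions holds one attention tensor per layer

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
head_view(outputs.attentions, tokens)  # renders the interactive head view
```

BertViz also ships `model_view` and `neuron_view` for coarser and finer-grained inspection of the same attention tensors.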
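Similarly, a hedged sketch of tf-keras-vis computing a saliency map over a Keras classifier. The VGG16 backbone, the class index, and the random input batch are illustrative choices for the sake of a self-contained example.

```python
# Saliency-map sketch with tf-keras-vis. Assumes `pip install tf-keras-vis tensorflow`;
# VGG16, the class index, and the random input are illustrative placeholders.
import numpy as np
import tensorflow as tf
from tf_keras_vis.saliency import Saliency
from tf_keras_vis.utils.model_modifiers import ReplaceToLinear
from tf_keras_vis.utils.scores import CategoricalScore

model = tf.keras.applications.VGG16(weights="imagenet")

# ReplaceToLinear swaps the final softmax for a linear activation,
# which yields cleaner gradients for attribution.
saliency = Saliency(model, model_modifier=ReplaceToLinear(), clone=True)

score = CategoricalScore([281])                             # ImageNet class 281 ("tabby cat")
images = np.random.rand(1, 224, 224, 3).astype("float32")  # stand-in for real images
saliency_map = saliency(score, images)                      # shape: (1, 224, 224)
```

The same callable pattern (`score` function plus seed input) applies to the toolkit's other visualizers, such as Gradcam and Scorecam.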