| Repository | Stars | Last commit | Latest release | Open issues | License | Language | Description |
|---|---|---|---|---|---|---|---|
| lucidrains/vit-pytorch | 16,298 | over 2 years ago | November 15, 2023 | 114 | MIT | Python | Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch |
| ozan-oktay/Attention-Gated-Networks | 1,099 | over 5 years ago | — | 0 | MIT | Python | Use of Attention Gates in a Convolutional Neural Network / Medical Image Classification and Segmentation |
| fwang91/residual-attention-network | 547 | over 8 years ago | — | 18 | — | — | Residual Attention Network for Image Classification |
| rayleizhu/BiFormer | 288 | almost 3 years ago | — | 1 | MIT | Python | [CVPR 2023] Official code release of our paper "BiFormer: Vision Transformer with Bi-Level Routing Attention" |
| qitianwu/DIFFormer | 204 | almost 3 years ago | — | 2 | — | Python | The official implementation for ICLR23 spotlight paper "DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion" |
| suvojit-0x55aa/A2S2K-ResNet | 132 | over 3 years ago | — | 1 | — | Python | A2S2K-ResNet: Attention-Based Adaptive Spectral-Spatial Kernel ResNet for Hyperspectral Image Classification |
| ChristophReich1996/MaxViT | 123 | about 3 years ago | — | 2 | MIT | Python | PyTorch reimplementation of the paper "MaxViT: Multi-Axis Vision Transformer" [arXiv 2022]. |
| koichiro11/residual-attention-network | 93 | almost 8 years ago | — | 7 | — | Python | |
| PKU-ICST-MIPL/OPAM_TIP2018 | 80 | about 7 years ago | — | 10 | — | Jupyter Notebook | Source code of our TIP 2018 paper "Object-Part Attention Model for Fine-grained Image Classification" |
| zyh-uaiaaaa/Erasing-Attention-Consistency | 57 | over 2 years ago | — | 6 | — | Python | Official implementation of the ECCV2022 paper: Learn From All: Erasing Attention Consistency for Noisy Label Facial Expression Recognition |
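Every repository listed above builds on attention in some form (transformer self-attention, attention gates, residual attention, routing attention). As a common reference point, here is a minimal NumPy sketch of the scaled dot-product attention at the core of these models; it is illustrative only and not taken from any of the listed repos, and the function names are my own:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (seq_len, d) arrays of queries, keys, and values.
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)       # pairwise query-key similarity
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ v                  # attention-weighted sum of values

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
k = rng.normal(size=(4, 8))
v = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (4, 8): one output vector per query
```

The listed projects differ mainly in where this operation is applied (image patches in ViT/MaxViT/BiFormer, feature maps in the attention-gate and residual-attention networks) and in how the score matrix is sparsified or constrained.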