| Repository | Stars | Last Updated | Issues | License | Language | Description |
|---|---|---|---|---|---|---|
| stanfordnlp/mac-network | 445 | about 5 years ago | 9 | apache-2.0 | Python | Implementation for the paper "Compositional Attention Networks for Machine Reasoning" (Hudson and Manning, ICLR 2018) |
| clvrai/Relation-Network-Tensorflow | 326 | over 7 years ago | 2 | mit | Python | Tensorflow implementations of Relational Networks and a VQA dataset named Sort-of-CLEVR proposed by DeepMind. |
| vacancy/NSCL-PyTorch-Release | 209 | almost 7 years ago | 7 | mit | Python | PyTorch implementation for the Neuro-Symbolic Concept Learner (NS-CL). |
| facebookresearch/grid-feats-vqa | 192 | over 4 years ago | 6 | apache-2.0 | Python | Grid features pre-training code for visual question answering |
| xinke-wang/Awesome-Text-VQA | 140 | about 3 years ago | 1 | | | |
| YunseokJANG/tgif-qa | 139 | over 4 years ago | 0 | | Python | Repository for our CVPR 2017 and IJCV work: TGIF-QA |
| abachaa/Existing-Medical-QA-Datasets | 135 | over 2 years ago | 0 | | | Multimodal Question Answering in the Medical Domain: A summary of Existing Datasets and Systems |
| vztu/VIDEVAL | 75 | over 4 years ago | 0 | mit | MATLAB | [IEEE TIP'2021] "UGC-VQA: Benchmarking Blind Video Quality Assessment for User Generated Content", Zhengzhong Tu, Yilin Wang, Neil Birkbeck, Balu Adsumilli, Alan C. Bovik |
| vztu/BVQA_Benchmark | 72 | about 4 years ago | 0 | mit | Python | A resource list and performance benchmark for blind video quality assessment (BVQA) models on user-generated content (UGC) datasets. [IEEE TIP'2021] "UGC-VQA: Benchmarking Blind Video Quality Assessment for User Generated Content", Zhengzhong Tu, Yilin Wang, Neil Birkbeck, Balu Adsumilli, Alan C. Bovik |
| chingyaoc/VQG-tensorflow | 64 | over 7 years ago | 2 | | Python | Visual Question Generation in Tensorflow |