| Repository | Stars | Last commit | Issues | License | Language | Description |
|---|---|---|---|---|---|---|
| facebookresearch/consistent_depth | 1,352 | over 3 years ago | 47 | MIT | Python | We estimate dense, flicker-free, geometrically consistent depth from monocular video, for example hand-held cell phone video. |
| yiranran/Audio-driven-TalkingFace-HeadPose | 645 | over 2 years ago | 59 | | Python | Code for "Audio-driven Talking Face Video Generation with Learning-based Personalized Head Pose" and "Predicting Personalized Head Movement From Short Video and Speech Signal" |
| thmoa/videoavatars | 379 | almost 6 years ago | 0 | | Python | This repository contains code corresponding to the paper "Video based reconstruction of 3D people models". |
| facebookresearch/DeepFovea | 364 | over 4 years ago | 1 | Other | PureBasic | Neural Reconstruction for Foveated Rendering and Video Compression using Learned Statistics of Natural Videos |
| akanazawa/motion_reconstruction | 258 | over 3 years ago | 11 | BSD-3-Clause | Python | Motion Reconstruction Code and Data for Skills from Videos (SFV) |
| arielephrat/vid2speech | 102 | about 9 years ago | 2 | | Python | Code for "Vid2speech: Speech Reconstruction from Silent Video" (ICASSP '17) |
| michaildoukas/head2head | 90 | over 5 years ago | 5 | MIT | Python | PyTorch implementation of Head2Head and Head2Head++. It can be used to fully transfer the head pose, facial expression, and eye movements from a source video to a target identity. |
| uzh-rpg/rpg_e2vid | 88 | about 6 years ago | 7 | GPL-3.0 | Python | Code for the paper "High Speed and High Dynamic Range Video with an Event Camera" (T-PAMI, 2019). |
| tencia/video_predict | 75 | over 3 years ago | 3 | MIT | Python | LSTM sequence modeling of video data |
| lppllppl920/DenseDescriptorLearning-Pytorch | 59 | about 4 years ago | 0 | GPL-3.0 | Python | Official repo for the paper "Extremely Dense Point Correspondences using a Learned Feature Descriptor" (CVPR 2020) |