CVPR 2020: The Most Complete Collection of Papers with Open-Source Code

Published by 阿木寺 (Amusi) on 2020-06-20

Preface

Amusi previously compiled PDF download resources for all 1,467 CVPR 2020 papers; see: It's all here!

CVPR2020-Code

A collection of open-source projects from CVPR 2020 papers. You are welcome to open an issue and share more CVPR 2020 open-source projects.

For papers from previous top CV conferences (e.g., CVPR 2019, ICCV 2019, ECCV 2018), as well as other high-quality CV papers and roundups, see: https://github.com/amusi/daily-paper-computer-vision
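Every entry below follows the same fixed pattern: a title line followed by bulleted `Paper:`/`Code:` links. That makes the list easy to process programmatically, e.g. to collect all repository URLs for batch cloning. Here is a minimal sketch of such a parser; the field labels (`Paper`, `Code`, `Homepage`, `Dataset`) match the ones used in this list, while the function name `parse_entries` is my own:

```python
import re

# A field line is "Paper:", "Code:", "Homepage:", or "Dataset:" (possibly with
# a parenthetical note before the colon) followed by an http(s) URL.
FIELD = re.compile(r"^(Paper|Code|Homepage|Dataset)\b[^:]*:\s*(https?://\S+)")

def parse_entries(text):
    """Group each paper title with the Paper:/Code:/... URL lines under it.

    Limitation of this sketch: non-URL field values (e.g. "not yet available")
    do not match FIELD and would start a spurious entry.
    """
    entries, current = [], None
    for raw in text.splitlines():
        line = raw.strip().lstrip("•").strip()  # drop the list's bullet marker
        if not line:
            continue
        m = FIELD.match(line)
        if m and current is not None:
            current[m.group(1).lower()] = m.group(2)  # e.g. current["code"] = URL
        elif not m:
            current = {"title": line}  # any non-field line starts a new entry
            entries.append(current)
    return entries
```

Feeding this page's raw text to `parse_entries` and collecting the `code` fields then yields a clone list in one line: `[e["code"] for e in parse_entries(text) if "code" in e]`.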

CNN

Exploring Self-attention for Image Recognition

  • Paper: https://hszhao.github.io/papers/cvpr20_san.pdf

  • Code: https://github.com/hszhao/SAN

Improving Convolutional Networks with Self-Calibrated Convolutions

  • Homepage: https://mmcheng.net/scconv/

  • Paper: http://mftp.mmcheng.net/Papers/20cvprSCNet.pdf

  • Code: https://github.com/backseason/SCNet

Rethinking Depthwise Separable Convolutions: How Intra-Kernel Correlations Lead to Improved MobileNets

  • Paper: https://arxiv.org/abs/2003.13549
  • Code: https://github.com/zeiss-microscopy/BSConv

Image Classification

Compositional Convolutional Neural Networks: A Deep Architecture with Innate Robustness to Partial Occlusion

  • Paper: https://arxiv.org/abs/2003.04490

  • Code: https://github.com/AdamKortylewski/CompositionalNets

Spatially Attentive Output Layer for Image Classification

  • Paper: https://arxiv.org/abs/2004.07570

  • Code (appears to have been deleted by the original authors): https://github.com/ildoonet/spatially-attentive-output-layer

Object Detection

Overcoming Classifier Imbalance for Long-tail Object Detection with Balanced Group Softmax

  • Paper: http://openaccess.thecvf.com/content_CVPR_2020/papers/Li_Overcoming_Classifier_Imbalance_for_Long-Tail_Object_Detection_With_Balanced_Group_CVPR_2020_paper.pdf
  • Code: https://github.com/FishYuLi/BalancedGroupSoftmax

AugFPN: Improving Multi-scale Feature Learning for Object Detection

  • Paper: http://openaccess.thecvf.com/content_CVPR_2020/papers/Guo_AugFPN_Improving_Multi-Scale_Feature_Learning_for_Object_Detection_CVPR_2020_paper.pdf
  • Code: https://github.com/Gus-Guo/AugFPN

Noise-Aware Fully Webly Supervised Object Detection

  • Paper: http://openaccess.thecvf.com/content_CVPR_2020/html/Shen_Noise-Aware_Fully_Webly_Supervised_Object_Detection_CVPR_2020_paper.html
  • Code: https://github.com/shenyunhang/NA-fWebSOD/

Learning a Unified Sample Weighting Network for Object Detection

  • Paper: https://arxiv.org/abs/2006.06568
  • Code: https://github.com/caiqi/sample-weighting-network

D2Det: Towards High Quality Object Detection and Instance Segmentation

  • Paper: http://openaccess.thecvf.com/content_CVPR_2020/papers/Cao_D2Det_Towards_High_Quality_Object_Detection_and_Instance_Segmentation_CVPR_2020_paper.pdf

  • Code: https://github.com/JialeCao001/D2Det

Dynamic Refinement Network for Oriented and Densely Packed Object Detection

  • Paper: https://arxiv.org/abs/2005.09973

  • Code and dataset: https://github.com/Anymake/DRN_CVPR2020

Scale-Equalizing Pyramid Convolution for Object Detection

  • Paper: https://arxiv.org/abs/2005.03101

  • Code: https://github.com/jshilong/SEPC

Revisiting the Sibling Head in Object Detector

  • Paper: https://arxiv.org/abs/2003.07540

  • Code: https://github.com/Sense-X/TSD

Detection in Crowded Scenes: One Proposal, Multiple Predictions

  • Paper: https://arxiv.org/abs/2003.09163
  • Code: https://github.com/megvii-model/CrowdDetection

Instance-aware, Context-focused, and Memory-efficient Weakly Supervised Object Detection

  • Paper: https://arxiv.org/abs/2004.04725
  • Code: https://github.com/NVlabs/wetectron

Bridging the Gap Between Anchor-based and Anchor-free Detection via Adaptive Training Sample Selection

  • Paper: https://arxiv.org/abs/1912.02424
  • Code: https://github.com/sfzhang15/ATSS

BiDet: An Efficient Binarized Object Detector

  • Paper: https://arxiv.org/abs/2003.03961
  • Code: https://github.com/ZiweiWangTHU/BiDet

Harmonizing Transferability and Discriminability for Adapting Object Detectors

  • Paper: https://arxiv.org/abs/2003.06297
  • Code: https://github.com/chaoqichen/HTCN

CentripetalNet: Pursuing High-quality Keypoint Pairs for Object Detection

  • Paper: https://arxiv.org/abs/2003.09119
  • Code: https://github.com/KiveeDong/CentripetalNet

Hit-Detector: Hierarchical Trinity Architecture Search for Object Detection

  • Paper: https://arxiv.org/abs/2003.11818
  • Code: https://github.com/ggjy/HitDet.pytorch

EfficientDet: Scalable and Efficient Object Detection

  • Paper: https://arxiv.org/abs/1911.09070
  • Code: https://github.com/google/automl/tree/master/efficientdet

3D Object Detection

SESS: Self-Ensembling Semi-Supervised 3D Object Detection

  • Paper: https://arxiv.org/abs/1912.11803

  • Code: https://github.com/Na-Z/sess

Associate-3Ddet: Perceptual-to-Conceptual Association for 3D Point Cloud Object Detection

  • Paper: https://arxiv.org/abs/2006.04356

  • Code: https://github.com/dleam/Associate-3Ddet

What You See is What You Get: Exploiting Visibility for 3D Object Detection

  • Homepage: https://www.cs.cmu.edu/~peiyunh/wysiwyg/

  • Paper: https://arxiv.org/abs/1912.04986

  • Code: https://github.com/peiyunh/wysiwyg

Learning Depth-Guided Convolutions for Monocular 3D Object Detection

  • Paper: https://arxiv.org/abs/1912.04799
  • Code: https://github.com/dingmyu/D4LCN

Structure Aware Single-stage 3D Object Detection from Point Cloud

  • Paper: http://openaccess.thecvf.com/content_CVPR_2020/html/He_Structure_Aware_Single-Stage_3D_Object_Detection_From_Point_Cloud_CVPR_2020_paper.html

  • Code: https://github.com/skyhehe123/SA-SSD

IDA-3D: Instance-Depth-Aware 3D Object Detection from Stereo Vision for Autonomous Driving

  • Paper: http://openaccess.thecvf.com/content_CVPR_2020/papers/Peng_IDA-3D_Instance-Depth-Aware_3D_Object_Detection_From_Stereo_Vision_for_Autonomous_CVPR_2020_paper.pdf

  • Code: https://github.com/swords123/IDA-3D

Train in Germany, Test in The USA: Making 3D Object Detectors Generalize

  • Paper: https://arxiv.org/abs/2005.08139

  • Code: https://github.com/cxy1997/3D_adapt_auto_driving

MLCVNet: Multi-Level Context VoteNet for 3D Object Detection

  • Paper: https://arxiv.org/abs/2004.05679
  • Code: https://github.com/NUAAXQ/MLCVNet

3DSSD: Point-based 3D Single Stage Object Detector

  • CVPR 2020 Oral

  • Paper: https://arxiv.org/abs/2002.10187

  • Code: https://github.com/tomztyang/3DSSD

Disp R-CNN: Stereo 3D Object Detection via Shape Prior Guided Instance Disparity Estimation

  • Paper: https://arxiv.org/abs/2004.03572

  • Code: https://github.com/zju3dv/disprcn

End-to-End Pseudo-LiDAR for Image-Based 3D Object Detection

  • Paper: https://arxiv.org/abs/2004.03080

  • Code: https://github.com/mileyan/pseudo-LiDAR_e2e

DSGN: Deep Stereo Geometry Network for 3D Object Detection

  • Paper: https://arxiv.org/abs/2001.03398
  • Code: https://github.com/chenyilun95/DSGN

LiDAR-based Online 3D Video Object Detection with Graph-based Message Passing and Spatiotemporal Transformer Attention

  • Paper: https://arxiv.org/abs/2004.01389
  • Code: https://github.com/yinjunbo/3DVID

PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection

  • Paper: https://arxiv.org/abs/1912.13192

  • Code: https://github.com/sshaoshuai/PV-RCNN

Point-GNN: Graph Neural Network for 3D Object Detection in a Point Cloud

  • Paper: https://arxiv.org/abs/2003.01251
  • Code: https://github.com/WeijingShi/Point-GNN

Video Object Detection

Memory Enhanced Global-Local Aggregation for Video Object Detection

  • Paper: https://arxiv.org/abs/2003.12063

  • Code: https://github.com/Scalsol/mega.pytorch

Object Tracking

SiamCAR: Siamese Fully Convolutional Classification and Regression for Visual Tracking

  • Paper: https://arxiv.org/abs/1911.07241
  • Code: https://github.com/ohhhyeahhh/SiamCAR

D3S – A Discriminative Single Shot Segmentation Tracker

  • Paper: https://arxiv.org/abs/1911.08862
  • Code: https://github.com/alanlukezic/d3s

ROAM: Recurrently Optimizing Tracking Model

  • Paper: https://arxiv.org/abs/1907.12006

  • Code: https://github.com/skyoung/ROAM

Siam R-CNN: Visual Tracking by Re-Detection

  • Homepage: https://www.vision.rwth-aachen.de/page/siamrcnn
  • Paper: https://arxiv.org/abs/1911.12836
  • Paper 2: https://www.vision.rwth-aachen.de/media/papers/192/siamrcnn.pdf
  • Code: https://github.com/VisualComputingInstitute/SiamR-CNN

Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises

  • Paper: https://arxiv.org/abs/2003.09595
  • Code: https://github.com/MasterBin-IIAU/CSA

High-Performance Long-Term Tracking with Meta-Updater

  • Paper: https://arxiv.org/abs/2004.00305

  • Code: https://github.com/Daikenan/LTMU

AutoTrack: Towards High-Performance Visual Tracking for UAV with Automatic Spatio-Temporal Regularization

  • Paper: https://arxiv.org/abs/2003.12949

  • Code: https://github.com/vision4robotics/AutoTrack

Probabilistic Regression for Visual Tracking

  • Paper: https://arxiv.org/abs/2003.12565
  • Code: https://github.com/visionml/pytracking

MAST: A Memory-Augmented Self-supervised Tracker

  • Paper: https://arxiv.org/abs/2002.07793
  • Code: https://github.com/zlai0/MAST

Siamese Box Adaptive Network for Visual Tracking

  • Paper: https://arxiv.org/abs/2003.06761
  • Code: https://github.com/hqucv/siamban

Multi-Object Tracking

3D-ZeF: A 3D Zebrafish Tracking Benchmark Dataset

  • Homepage: https://vap.aau.dk/3d-zef/
  • Paper: https://arxiv.org/abs/2006.08466
  • Code: https://bitbucket.org/aauvap/3d-zef/src/master/
  • Dataset: https://motchallenge.net/data/3D-ZeF20

Semantic Segmentation

Super-BPD: Super Boundary-to-Pixel Direction for Fast Image Segmentation

  • Paper: not yet available

  • Code: https://github.com/JianqiangWan/Super-BPD

Single-Stage Semantic Segmentation from Image Labels

  • Paper: https://arxiv.org/abs/2005.08104

  • Code: https://github.com/visinf/1-stage-wseg

Learning Texture Invariant Representation for Domain Adaptation of Semantic Segmentation

  • Paper: https://arxiv.org/abs/2003.00867
  • Code: https://github.com/MyeongJin-Kim/Learning-Texture-Invariant-Representation

MSeg: A Composite Dataset for Multi-domain Semantic Segmentation

  • Paper: http://vladlen.info/papers/MSeg.pdf
  • Code: https://github.com/mseg-dataset/mseg-api

CascadePSP: Toward Class-Agnostic and Very High-Resolution Segmentation via Global and Local Refinement

  • Paper: https://arxiv.org/abs/2005.02551
  • Code: https://github.com/hkchengrex/CascadePSP

Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision

  • Oral
  • Paper: https://arxiv.org/abs/2004.07703
  • Code: https://github.com/feipan664/IntraDA

Self-supervised Equivariant Attention Mechanism for Weakly Supervised Semantic Segmentation

  • Paper: https://arxiv.org/abs/2004.04581
  • Code: https://github.com/YudeWang/SEAM

Temporally Distributed Networks for Fast Video Segmentation

  • Paper: https://arxiv.org/abs/2004.01800

  • Code: https://github.com/feinanshan/TDNet

Context Prior for Scene Segmentation

  • Paper: https://arxiv.org/abs/2004.01547

  • Code: https://git.io/ContextPrior

Strip Pooling: Rethinking Spatial Pooling for Scene Parsing

  • Paper: https://arxiv.org/abs/2003.13328

  • Code: https://github.com/Andrew-Qibin/SPNet

Cars Can’t Fly up in the Sky: Improving Urban-Scene Segmentation via Height-driven Attention Networks

  • Paper: https://arxiv.org/abs/2003.05128
  • Code: https://github.com/shachoi/HANet

Learning Dynamic Routing for Semantic Segmentation

  • Paper: https://arxiv.org/abs/2003.10401

  • Code: https://github.com/yanwei-li/DynamicRouting

Instance Segmentation

D2Det: Towards High Quality Object Detection and Instance Segmentation

  • Paper: http://openaccess.thecvf.com/content_CVPR_2020/papers/Cao_D2Det_Towards_High_Quality_Object_Detection_and_Instance_Segmentation_CVPR_2020_paper.pdf

  • Code: https://github.com/JialeCao001/D2Det

PolarMask: Single Shot Instance Segmentation with Polar Representation

  • Paper: https://arxiv.org/abs/1909.13226
  • Code: https://github.com/xieenze/PolarMask
  • Commentary: https://zhuanlan.zhihu.com/p/84890413

CenterMask: Real-Time Anchor-Free Instance Segmentation

  • Paper: https://arxiv.org/abs/1911.06667
  • Code: https://github.com/youngwanLEE/CenterMask

BlendMask: Top-Down Meets Bottom-Up for Instance Segmentation

  • Paper: https://arxiv.org/abs/2001.00309
  • Code: https://github.com/aim-uofa/AdelaiDet

Deep Snake for Real-Time Instance Segmentation

  • Paper: https://arxiv.org/abs/2001.01629
  • Code: https://github.com/zju3dv/snake

Mask Encoding for Single Shot Instance Segmentation

  • Paper: https://arxiv.org/abs/2003.11712

  • Code: https://github.com/aim-uofa/AdelaiDet

Panoptic Segmentation

Pixel Consensus Voting for Panoptic Segmentation

  • Paper: https://arxiv.org/abs/2004.01849
  • Code: not yet released

BANet: Bidirectional Aggregation Network with Occlusion Handling for Panoptic Segmentation

  • Paper: https://arxiv.org/abs/2003.14031

  • Code: https://github.com/Mooonside/BANet

Video Object Segmentation

A Transductive Approach for Video Object Segmentation

  • Paper: https://arxiv.org/abs/2004.07193

  • Code: https://github.com/microsoft/transductive-vos.pytorch

State-Aware Tracker for Real-Time Video Object Segmentation

  • Paper: https://arxiv.org/abs/2003.00482

  • Code: https://github.com/MegviiDetection/video_analyst

Learning Fast and Robust Target Models for Video Object Segmentation

  • Paper: https://arxiv.org/abs/2003.00908
  • Code: https://github.com/andr345/frtm-vos

Learning Video Object Segmentation from Unlabeled Videos

  • Paper: https://arxiv.org/abs/2003.05020
  • Code: https://github.com/carrierlxk/MuG

Superpixel Segmentation

Superpixel Segmentation with Fully Convolutional Networks

  • Paper: https://arxiv.org/abs/2003.12929
  • Code: https://github.com/fuy34/superpixel_fcn

NAS

AOWS: Adaptive and optimal network width search with latency constraints

  • Paper: https://arxiv.org/abs/2005.10481
  • Code: https://github.com/bermanmaxim/AOWS

Densely Connected Search Space for More Flexible Neural Architecture Search

  • Paper: https://arxiv.org/abs/1906.09607

  • Code: https://github.com/JaminFong/DenseNAS

MTL-NAS: Task-Agnostic Neural Architecture Search towards General-Purpose Multi-Task Learning

  • Paper: https://arxiv.org/abs/2003.14058

  • Code: https://github.com/bhpfelix/MTLNAS

FBNetV2: Differentiable Neural Architecture Search for Spatial and Channel Dimensions

  • Paper: https://arxiv.org/abs/2004.05565

  • Code: https://github.com/facebookresearch/mobile-vision

Neural Architecture Search for Lightweight Non-Local Networks

  • Paper: https://arxiv.org/abs/2004.01961
  • Code: https://github.com/LiYingwei/AutoNL

Rethinking Performance Estimation in Neural Architecture Search

  • Paper: https://arxiv.org/abs/2005.09917
  • Code: https://github.com/zhengxiawu/rethinking_performance_estimation_in_NAS
  • Commentary 1: https://www.zhihu.com/question/372070853/answer/1035234510
  • Commentary 2: https://zhuanlan.zhihu.com/p/111167409

CARS: Continuous Evolution for Efficient Neural Architecture Search

  • Paper: https://arxiv.org/abs/1909.04977
  • Code (to be open-sourced): https://github.com/huawei-noah/CARS

GAN

Distribution-induced Bidirectional Generative Adversarial Network for Graph Representation Learning

  • Paper: https://arxiv.org/abs/1912.01899
  • Code: https://github.com/SsGood/DBGAN

PSGAN: Pose and Expression Robust Spatial-Aware GAN for Customizable Makeup Transfer

  • Paper: https://arxiv.org/abs/1909.06956
  • Code: https://github.com/wtjiang98/PSGAN

Semantically Multi-Modal Image Synthesis

  • Homepage: http://seanseattle.github.io/SMIS
  • Paper: https://arxiv.org/abs/2003.12697
  • Code: https://github.com/Seanseattle/SMIS

Unpaired Portrait Drawing Generation via Asymmetric Cycle Mapping

  • Paper: https://yiranran.github.io/files/CVPR2020_Unpaired%20Portrait%20Drawing%20Generation%20via%20Asymmetric%20Cycle%20Mapping.pdf
  • Code: https://github.com/yiranran/Unpaired-Portrait-Drawing

Learning to Cartoonize Using White-box Cartoon Representations

  • Paper: https://github.com/SystemErrorWang/White-box-Cartoonization/blob/master/paper/06791.pdf

  • Homepage: https://systemerrorwang.github.io/White-box-Cartoonization/

  • Code: https://github.com/SystemErrorWang/White-box-Cartoonization

  • Commentary: https://zhuanlan.zhihu.com/p/117422157

  • Demo video: https://www.bilibili.com/video/av56708333

GAN Compression: Efficient Architectures for Interactive Conditional GANs

  • Paper: https://arxiv.org/abs/2003.08936

  • Code: https://github.com/mit-han-lab/gan-compression

Watch your Up-Convolution: CNN Based Generative Deep Neural Networks are Failing to Reproduce Spectral Distributions

  • Paper: https://arxiv.org/abs/2003.01826
  • Code: https://github.com/cc-hpc-itwm/UpConv

Re-ID

COCAS: A Large-Scale Clothes Changing Person Dataset for Re-identification

  • Paper: https://arxiv.org/abs/2005.07862

  • Dataset: not yet available

Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking

  • Paper: https://arxiv.org/abs/2004.04199

  • Code: https://github.com/whj363636/Adversarial-attack-on-Person-ReID-With-Deep-Mis-Ranking

Pose-guided Visible Part Matching for Occluded Person ReID

  • Paper: https://arxiv.org/abs/2004.00230
  • Code: https://github.com/hh23333/PVPM

Weakly supervised discriminative feature learning with state information for person identification

  • Paper: https://arxiv.org/abs/2002.11939
  • Code: https://github.com/KovenYu/state-information

3D Point Clouds (Classification / Segmentation / Registration, etc.)

3D Point Cloud Convolution

PointASNL: Robust Point Clouds Processing using Nonlocal Neural Networks with Adaptive Sampling

  • Paper: https://arxiv.org/abs/2003.00492
  • Code: https://github.com/yanx27/PointASNL

Global-Local Bidirectional Reasoning for Unsupervised Representation Learning of 3D Point Clouds

  • Paper: https://arxiv.org/abs/2003.12971

  • Code: https://github.com/raoyongming/PointGLR

Grid-GCN for Fast and Scalable Point Cloud Learning

  • Paper: https://arxiv.org/abs/1912.02984

  • Code: https://github.com/Xharlie/Grid-GCN

FPConv: Learning Local Flattening for Point Convolution

  • Paper: https://arxiv.org/abs/2002.10701
  • Code: https://github.com/lyqun/FPConv

3D Point Cloud Classification

PointAugment: an Auto-Augmentation Framework for Point Cloud Classification

  • Paper: https://arxiv.org/abs/2002.10876
  • Code (to be open-sourced): https://github.com/liruihui/PointAugment/

3D Point Cloud Semantic Segmentation

RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds

  • Paper: https://arxiv.org/abs/1911.11236

  • Code: https://github.com/QingyongHu/RandLA-Net

  • Commentary: https://zhuanlan.zhihu.com/p/105433460

Weakly Supervised Semantic Point Cloud Segmentation: Towards 10X Fewer Labels

  • Paper: https://arxiv.org/abs/2004.0409

  • Code: https://github.com/alex-xun-xu/WeakSupPointCloudSeg

PolarNet: An Improved Grid Representation for Online LiDAR Point Clouds Semantic Segmentation

  • Paper: https://arxiv.org/abs/2003.14032
  • Code: https://github.com/edwardzhou130/PolarSeg

Learning to Segment 3D Point Clouds in 2D Image Space

  • Paper: https://arxiv.org/abs/2003.05593

  • Code: https://github.com/WPI-VISLab/Learning-to-Segment-3D-Point-Clouds-in-2D-Image-Space

3D Point Cloud Instance Segmentation

PointGroup: Dual-Set Point Grouping for 3D Instance Segmentation

  • Paper: https://arxiv.org/abs/2004.01658
  • Code: https://github.com/Jia-Research-Lab/PointGroup

3D Point Cloud Registration

D3Feat: Joint Learning of Dense Detection and Description of 3D Local Features

  • Paper: https://arxiv.org/abs/2003.03164
  • Code: https://github.com/XuyangBai/D3Feat

RPM-Net: Robust Point Matching using Learned Features

  • Paper: https://arxiv.org/abs/2003.13479
  • Code: https://github.com/yewzijian/RPMNet

3D Point Cloud Completion

Cascaded Refinement Network for Point Cloud Completion

  • Paper: https://arxiv.org/abs/2004.03327
  • Code: https://github.com/xiaogangw/cascaded-point-completion

3D Point Cloud Object Tracking

P2B: Point-to-Box Network for 3D Object Tracking in Point Clouds

  • Paper: https://arxiv.org/abs/2005.13888
  • Code: https://github.com/HaozheQi/P2B

Others

An Efficient PointLSTM for Point Clouds Based Gesture Recognition

  • Paper: http://openaccess.thecvf.com/content_CVPR_2020/html/Min_An_Efficient_PointLSTM_for_Point_Clouds_Based_Gesture_Recognition_CVPR_2020_paper.html
  • Code: https://github.com/Blueprintf/pointlstm-gesture-recognition-pytorch

Face

Face Recognition

CurricularFace: Adaptive Curriculum Learning Loss for Deep Face Recognition

  • Paper: https://arxiv.org/abs/2004.00288

  • Code: https://github.com/HuangYG123/CurricularFace

Learning Meta Face Recognition in Unseen Domains

  • Paper: https://arxiv.org/abs/2003.07733
  • Code: https://github.com/cleardusk/MFR
  • Commentary: https://mp.weixin.qq.com/s/YZoEnjpnlvb90qSI3xdJqQ

Face Detection

Face Anti-Spoofing

Searching Central Difference Convolutional Networks for Face Anti-Spoofing

  • Paper: https://arxiv.org/abs/2003.04092

  • Code: https://github.com/ZitongYu/CDCN

Facial Expression Recognition

Suppressing Uncertainties for Large-Scale Facial Expression Recognition

  • Paper: https://arxiv.org/abs/2002.10392

  • Code (to be open-sourced): https://github.com/kaiwang960112/Self-Cure-Network

Face Frontalization

Rotate-and-Render: Unsupervised Photorealistic Face Rotation from Single-View Images

  • Paper: https://arxiv.org/abs/2003.08124
  • Code: https://github.com/Hangz-nju-cuhk/Rotate-and-Render

3D Face Reconstruction

AvatarMe: Realistically Renderable 3D Facial Reconstruction "in-the-wild"

  • Paper: https://arxiv.org/abs/2003.13845
  • Dataset: https://github.com/lattas/AvatarMe

FaceScape: a Large-scale High Quality 3D Face Dataset and Detailed Riggable 3D Face Prediction

  • Paper: https://arxiv.org/abs/2003.13989
  • Code: https://github.com/zhuhao-nju/facescape

Human Pose Estimation (2D/3D)

2D Human Pose Estimation

HigherHRNet: Scale-Aware Representation Learning for Bottom-Up Human Pose Estimation

  • Paper: https://arxiv.org/abs/1908.10357
  • Code: https://github.com/HRNet/HigherHRNet-Human-Pose-Estimation

The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation

  • Paper: https://arxiv.org/abs/1911.07524
  • Code: https://github.com/HuangJunJie2017/UDP-Pose
  • Commentary: https://zhuanlan.zhihu.com/p/92525039

Distribution-Aware Coordinate Representation for Human Pose Estimation

  • Homepage: https://ilovepose.github.io/coco/

  • Paper: https://arxiv.org/abs/1910.06278

  • Code: https://github.com/ilovepose/DarkPose

3D Human Pose Estimation

Fusing Wearable IMUs with Multi-View Images for Human Pose Estimation: A Geometric Approach

  • Homepage: https://www.zhe-zhang.com/cvpr2020

  • Paper: https://arxiv.org/abs/2003.11163

  • Code: https://github.com/CHUNYUWANG/imu-human-pose-pytorch

Bodies at Rest: 3D Human Pose and Shape Estimation from a Pressure Image using Synthetic Data

  • Paper: https://arxiv.org/abs/2004.01166

  • Code: https://github.com/Healthcare-Robotics/bodies-at-rest

  • Dataset: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/KOA4ML

Self-Supervised 3D Human Pose Estimation via Part Guided Novel Image Synthesis

  • Homepage: http://val.cds.iisc.ac.in/pgp-human/
  • Paper: https://arxiv.org/abs/2004.04400

Compressed Volumetric Heatmaps for Multi-Person 3D Pose Estimation

  • Paper: https://arxiv.org/abs/2004.00329
  • Code: https://github.com/fabbrimatteo/LoCO

VIBE: Video Inference for Human Body Pose and Shape Estimation

  • Paper: https://arxiv.org/abs/1912.05656
  • Code: https://github.com/mkocabas/VIBE

Back to the Future: Joint Aware Temporal Deep Learning 3D Human Pose Estimation

  • Paper: https://arxiv.org/abs/2002.11251
  • Code: https://github.com/vnmr/JointVideoPose3D

Cross-View Tracking for Multi-Human 3D Pose Estimation at over 100 FPS

  • Paper: https://arxiv.org/abs/2003.03972
  • Dataset: not yet available

Human Parsing

Correlating Edge, Pose with Parsing

  • Paper: https://arxiv.org/abs/2005.01431

  • Code: https://github.com/ziwei-zh/CorrPM

Scene Text Detection

ContourNet: Taking a Further Step Toward Accurate Arbitrary-Shaped Scene Text Detection

  • Paper: http://openaccess.thecvf.com/content_CVPR_2020/papers/Wang_ContourNet_Taking_a_Further_Step_Toward_Accurate_Arbitrary-Shaped_Scene_Text_CVPR_2020_paper.pdf
  • Code: https://github.com/wangyuxin87/ContourNet

UnrealText: Synthesizing Realistic Scene Text Images from the Unreal World

  • Paper: https://arxiv.org/abs/2003.10608
  • Code and dataset: https://github.com/Jyouhou/UnrealText/

ABCNet: Real-time Scene Text Spotting with Adaptive Bezier-Curve Network

  • Paper: https://arxiv.org/abs/2002.10200
  • Code (to be open-sourced): https://github.com/Yuliang-Liu/bezier_curve_text_spotting
  • Code (to be open-sourced): https://github.com/aim-uofa/adet

Deep Relational Reasoning Graph Network for Arbitrary Shape Text Detection

  • Paper: https://arxiv.org/abs/2003.07493

  • Code: https://github.com/GXYM/DRRG

Scene Text Recognition

SEED: Semantics Enhanced Encoder-Decoder Framework for Scene Text Recognition

  • Paper: https://arxiv.org/abs/2005.10977
  • Code: https://github.com/Pay20Y/SEED

UnrealText: Synthesizing Realistic Scene Text Images from the Unreal World

  • Paper: https://arxiv.org/abs/2003.10608
  • Code and dataset: https://github.com/Jyouhou/UnrealText/

ABCNet: Real-time Scene Text Spotting with Adaptive Bezier-Curve Network

  • Paper: https://arxiv.org/abs/2002.10200
  • Code (to be open-sourced): https://github.com/aim-uofa/adet

Learn to Augment: Joint Data Augmentation and Network Optimization for Text Recognition

  • Paper: https://arxiv.org/abs/2003.06606

  • Code: https://github.com/Canjie-Luo/Text-Image-Augmentation

Feature (Point) Detection and Description

SuperGlue: Learning Feature Matching with Graph Neural Networks

  • Paper: https://arxiv.org/abs/1911.11763
  • Code: https://github.com/magicleap/SuperGluePretrainedNetwork

Super-Resolution

Image Super-Resolution

Closed-Loop Matters: Dual Regression Networks for Single Image Super-Resolution

  • Paper: http://openaccess.thecvf.com/content_CVPR_2020/html/Guo_Closed-Loop_Matters_Dual_Regression_Networks_for_Single_Image_Super-Resolution_CVPR_2020_paper.html
  • Code: https://github.com/guoyongcs/DRN

Learning Texture Transformer Network for Image Super-Resolution

  • Paper: https://arxiv.org/abs/2006.04139

  • Code: https://github.com/FuzhiYang/TTSR

Image Super-Resolution with Cross-Scale Non-Local Attention and Exhaustive Self-Exemplars Mining

  • Paper: https://arxiv.org/abs/2006.01424
  • Code: https://github.com/SHI-Labs/Cross-Scale-Non-Local-Attention

Structure-Preserving Super Resolution with Gradient Guidance

  • Paper: https://arxiv.org/abs/2003.13081

  • Code: https://github.com/Maclory/SPSR

Rethinking Data Augmentation for Image Super-resolution: A Comprehensive Analysis and a New Strategy

  • Paper: https://arxiv.org/abs/2004.00448

  • Code: https://github.com/clovaai/cutblur

Video Super-Resolution

TDAN: Temporally-Deformable Alignment Network for Video Super-Resolution

  • Paper: https://arxiv.org/abs/1812.02898
  • Code: https://github.com/YapengTian/TDAN-VSR-CVPR-2020

Space-Time-Aware Multi-Resolution Video Enhancement

  • Homepage: https://alterzero.github.io/projects/STAR.html
  • Paper: http://arxiv.org/abs/2003.13170
  • Code: https://github.com/alterzero/STARnet

Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution

  • Paper: https://arxiv.org/abs/2002.11616
  • Code: https://github.com/Mukosame/Zooming-Slow-Mo-CVPR-2020

Model Compression / Pruning

DMCP: Differentiable Markov Channel Pruning for Neural Networks

  • Paper: https://arxiv.org/abs/2005.03354
  • Code: https://github.com/zx55/dmcp

Forward and Backward Information Retention for Accurate Binary Neural Networks

  • Paper: https://arxiv.org/abs/1909.10788

  • Code: https://github.com/htqin/IR-Net

Towards Efficient Model Compression via Learned Global Ranking

  • Paper: https://arxiv.org/abs/1904.12368
  • Code: https://github.com/cmu-enyac/LeGR

HRank: Filter Pruning using High-Rank Feature Map

  • Paper: http://arxiv.org/abs/2002.10179
  • Code: https://github.com/lmbxmu/HRank

GAN Compression: Efficient Architectures for Interactive Conditional GANs

  • Paper: https://arxiv.org/abs/2003.08936

  • Code: https://github.com/mit-han-lab/gan-compression

Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression

  • Paper: https://arxiv.org/abs/2003.08935

  • Code: https://github.com/ofsoundof/group_sparsity

Video Understanding / Action Recognition

Oops! Predicting Unintentional Action in Video

  • Homepage: https://oops.cs.columbia.edu/

  • Paper: https://arxiv.org/abs/1911.11206

  • Code: https://github.com/cvlab-columbia/oops

  • Dataset: https://oops.cs.columbia.edu/data

PREDICT & CLUSTER: Unsupervised Skeleton Based Action Recognition

  • Paper: https://arxiv.org/abs/1911.12409
  • Code: https://github.com/shlizee/Predict-Cluster

Intra- and Inter-Action Understanding via Temporal Action Parsing

  • Paper: https://arxiv.org/abs/2005.10229
  • Homepage and dataset: https://sdolivia.github.io/TAPOS/

3DV: 3D Dynamic Voxel for Action Recognition in Depth Video

  • Paper: https://arxiv.org/abs/2005.05501
  • Code: https://github.com/3huo/3DV-Action

FineGym: A Hierarchical Video Dataset for Fine-grained Action Understanding

  • Homepage: https://sdolivia.github.io/FineGym/
  • Paper: https://arxiv.org/abs/2004.06704

TEA: Temporal Excitation and Aggregation for Action Recognition

  • Paper: https://arxiv.org/abs/2004.01398

  • Code: https://github.com/Phoenix1327/tea-action-recognition

X3D: Expanding Architectures for Efficient Video Recognition

  • Paper: https://arxiv.org/abs/2004.04730

  • Code: https://github.com/facebookresearch/SlowFast

Temporal Pyramid Network for Action Recognition

  • Homepage: https://decisionforce.github.io/TPN

  • Paper: https://arxiv.org/abs/2004.03548

  • Code: https://github.com/decisionforce/TPN

Skeleton-Based Action Recognition

Disentangling and Unifying Graph Convolutions for Skeleton-Based Action Recognition

  • Paper: https://arxiv.org/abs/2003.14111
  • Code: https://github.com/kenziyuliu/ms-g3d

Crowd Counting

Depth Estimation

BiFuse: Monocular 360° Depth Estimation via Bi-Projection Fusion

  • Paper: http://openaccess.thecvf.com/content_CVPR_2020/papers/Wang_BiFuse_Monocular_360_Depth_Estimation_via_Bi-Projection_Fusion_CVPR_2020_paper.pdf
  • Code: https://github.com/Yeh-yu-hsuan/BiFuse

Focus on defocus: bridging the synthetic to real domain gap for depth estimation

  • Paper: https://arxiv.org/abs/2005.09623
  • Code: https://github.com/dvl-tum/defocus-net

Bi3D: Stereo Depth Estimation via Binary Classifications

  • Paper: https://arxiv.org/abs/2005.07274

  • Code: https://github.com/NVlabs/Bi3D

AANet: Adaptive Aggregation Network for Efficient Stereo Matching

  • Paper: https://arxiv.org/abs/2004.09548
  • Code: https://github.com/haofeixu/aanet

Towards Better Generalization: Joint Depth-Pose Learning without PoseNet

  • Paper: https://github.com/B1ueber2y/TrianFlow

  • Code: https://github.com/B1ueber2y/TrianFlow

Monocular Depth Estimation

On the uncertainty of self-supervised monocular depth estimation

  • Paper: https://arxiv.org/abs/2005.06209
  • Code: https://github.com/mattpoggi/mono-uncertainty

3D Packing for Self-Supervised Monocular Depth Estimation

  • Paper: https://arxiv.org/abs/1905.02693
  • Code: https://github.com/TRI-ML/packnet-sfm
  • Demo video: https://www.bilibili.com/video/av70562892/

Domain Decluttering: Simplifying Images to Mitigate Synthetic-Real Domain Shift and Improve Depth Estimation

  • Paper: https://arxiv.org/abs/2002.12114
  • Code: https://github.com/yzhao520/ARC

6D Object Pose Estimation

MoreFusion: Multi-object Reasoning for 6D Pose Estimation from Volumetric Fusion

  • Paper: https://arxiv.org/abs/2004.04336
  • Code: https://github.com/wkentaro/morefusion

EPOS: Estimating 6D Pose of Objects with Symmetries

  • Homepage: http://cmp.felk.cvut.cz/epos

  • Paper: https://arxiv.org/abs/2004.00605

G2L-Net: Global to Local Network for Real-time 6D Pose Estimation with Embedding Vector Features

  • Paper: https://arxiv.org/abs/2003.11089

  • Code: https://github.com/DC1991/G2L_Net

Hand Pose Estimation

HOPE-Net: A Graph-based Model for Hand-Object Pose Estimation

  • Paper: https://arxiv.org/abs/2004.00060

  • Homepage: http://vision.sice.indiana.edu/projects/hopenet

Monocular Real-time Hand Shape and Motion Capture using Multi-modal Data

  • Paper: https://arxiv.org/abs/2003.09572

  • Code: https://github.com/CalciferZh/minimal-hand

Saliency Detection

JL-DCF: Joint Learning and Densely-Cooperative Fusion Framework for RGB-D Salient Object Detection

  • Paper: https://arxiv.org/abs/2004.08515

  • Code: https://github.com/kerenfu/JLDCF/

UC-Net: Uncertainty Inspired RGB-D Saliency Detection via Conditional Variational Autoencoders

  • Homepage: http://dpfan.net/d3netbenchmark/

  • Paper: https://arxiv.org/abs/2004.05763

  • Code: https://github.com/JingZhang617/UCNet

Denoising

A Physics-based Noise Formation Model for Extreme Low-light Raw Denoising

  • Paper: https://arxiv.org/abs/2003.12751

  • Code: https://github.com/Vandermode/NoiseModel

CycleISP: Real Image Restoration via Improved Data Synthesis

  • Paper: https://arxiv.org/abs/2003.07761

  • Code: https://github.com/swz30/CycleISP

Deraining

Multi-Scale Progressive Fusion Network for Single Image Deraining

  • Paper: https://arxiv.org/abs/2003.10985

  • Code: https://github.com/kuihua/MSPFN

去模糊

視訊去模糊

Cascaded Deep Video Deblurring Using Temporal Sharpness Prior

  • 主頁:https://csbhr.github.io/projects/cdvd-tsp/index.html
  • 論文:https://arxiv.org/abs/2004.02501
  • 程式碼:https://github.com/csbhr/CDVD-TSP

Dehazing

Multi-Scale Boosted Dehazing Network with Dense Feature Fusion

  • Paper: https://arxiv.org/abs/2004.13388
  • Code: https://github.com/BookerDeWitt/MSBDN-DFF

Feature Detection and Description

ASLFeat: Learning Local Features of Accurate Shape and Localization

  • Paper: https://arxiv.org/abs/2003.10071
  • Code: https://github.com/lzx551402/aslfeat

Visual Question Answering (VQA)

VC R-CNN: Visual Commonsense R-CNN

  • Paper: https://arxiv.org/abs/2002.12204
  • Code: https://github.com/Wangt-CN/VC-R-CNN

Video Question Answering (VideoQA)

Hierarchical Conditional Relation Networks for Video Question Answering

  • Paper: https://arxiv.org/abs/2002.10698
  • Code: https://github.com/thaolmk54/hcrn-videoqa

Vision-and-Language Navigation

Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-training

  • Paper: https://arxiv.org/abs/2002.10638
  • Code (to be released): https://github.com/weituo12321/PREVALENT

Video Compression

Learning for Video Compression with Hierarchical Quality and Recurrent Enhancement

  • Paper: https://arxiv.org/abs/2003.01966
  • Code: https://github.com/RenYang-home/HLVC

Video Frame Interpolation

FeatureFlow: Robust Video Interpolation via Structure-to-Texture Generation

  • Paper: http://openaccess.thecvf.com/content_CVPR_2020/html/Gui_FeatureFlow_Robust_Video_Interpolation_via_Structure-to-Texture_Generation_CVPR_2020_paper.html
  • Code: https://github.com/CM-BF/FeatureFlow

Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution

  • Paper: https://arxiv.org/abs/2002.11616
  • Code: https://github.com/Mukosame/Zooming-Slow-Mo-CVPR-2020

Space-Time-Aware Multi-Resolution Video Enhancement

  • Homepage: https://alterzero.github.io/projects/STAR.html
  • Paper: http://arxiv.org/abs/2003.13170
  • Code: https://github.com/alterzero/STARnet

Scene-Adaptive Video Frame Interpolation via Meta-Learning

  • Paper: https://arxiv.org/abs/2004.00779
  • Code: https://github.com/myungsub/meta-interpolation

Softmax Splatting for Video Frame Interpolation

  • Homepage: http://sniklaus.com/papers/softsplat
  • Paper: https://arxiv.org/abs/2003.05534
  • Code: https://github.com/sniklaus/softmax-splatting
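
The idea behind softmax splatting is forward warping: each source pixel is pushed to a target location given by the flow, and pixels that collide at the same target are blended with softmax weights derived from a learned importance map. A hypothetical 1-D toy sketch of that blending rule (nearest-pixel splatting, not the paper's differentiable bilinear implementation):

```python
import numpy as np

def softmax_splat_1d(values, flow, importance, out_len):
    """Forward-splat values[i] to round(i + flow[i]); colliding pixels
    are blended with softmax weights exp(importance[i])."""
    num = np.zeros(out_len)
    den = np.zeros(out_len)
    w = np.exp(importance)              # softmax numerators
    for i, v in enumerate(values):
        t = int(round(i + flow[i]))     # nearest-pixel splat for simplicity
        if 0 <= t < out_len:
            num[t] += w[i] * v
            den[t] += w[i]
    # normalize where anything landed; empty targets stay 0
    return np.where(den > 0, num / np.where(den > 0, den, 1.0), 0.0)

# Two source pixels collide at target index 1; the one with higher
# importance dominates the blend.
print(softmax_splat_1d([1.0, 3.0], [1.0, 0.0], [0.0, np.log(3.0)], 3))
```

With weights 1 and 3, the blended value at index 1 is (1·1 + 3·3) / 4 = 2.5, illustrating how importance resolves occlusion-style collisions.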

Style Transfer

Diversified Arbitrary Style Transfer via Deep Feature Perturbation

  • Paper: https://arxiv.org/abs/1909.08223
  • Code: https://github.com/EndyWon/Deep-Feature-Perturbation

Collaborative Distillation for Ultra-Resolution Universal Style Transfer

  • Paper: https://arxiv.org/abs/2003.08436
  • Code: https://github.com/mingsun-tse/collaborative-distillation

Lane Detection

Inter-Region Affinity Distillation for Road Marking Segmentation

  • Paper: https://arxiv.org/abs/2004.05304
  • Code: https://github.com/cardwing/Codes-for-IntRA-KD

Human-Object Interaction (HOI) Detection

PPDM: Parallel Point Detection and Matching for Real-time Human-Object Interaction Detection

  • Paper: https://arxiv.org/abs/1912.12898
  • Code: https://github.com/YueLiao/PPDM

Detailed 2D-3D Joint Representation for Human-Object Interaction

  • Paper: https://arxiv.org/abs/2004.08154
  • Code: https://github.com/DirtyHarryLYL/DJ-RN

Cascaded Human-Object Interaction Recognition

  • Paper: https://arxiv.org/abs/2003.04262
  • Code: https://github.com/tfzhou/C-HOI

VSGNet: Spatial Attention Network for Detecting Human Object Interactions Using Graph Convolutions

  • Paper: https://arxiv.org/abs/2003.05541
  • Code: https://github.com/ASMIftekhar/VSGNet

Trajectory Prediction

The Garden of Forking Paths: Towards Multi-Future Trajectory Prediction

  • Paper: https://arxiv.org/abs/1912.06445
  • Code: https://github.com/JunweiLiang/Multiverse
  • Dataset: https://next.cs.cmu.edu/multiverse/

Social-STGCNN: A Social Spatio-Temporal Graph Convolutional Neural Network for Human Trajectory Prediction

  • Paper: https://arxiv.org/abs/2002.11927
  • Code: https://github.com/abduallahmohamed/Social-STGCNN

Motion Prediction

Collaborative Motion Prediction via Neural Motion Message Passing

  • Paper: https://arxiv.org/abs/2003.06594
  • Code: https://github.com/PhyllisH/NMMP

MotionNet: Joint Perception and Motion Prediction for Autonomous Driving Based on Bird's Eye View Maps

  • Paper: https://arxiv.org/abs/2003.06754
  • Code: https://github.com/pxiangwu/MotionNet

Optical Flow Estimation

Learning by Analogy: Reliable Supervision from Transformations for Unsupervised Optical Flow Estimation

  • Paper: https://arxiv.org/abs/2003.13045
  • Code: https://github.com/lliuz/ARFlow

Image Retrieval

Evade Deep Image Retrieval by Stashing Private Images in the Hash Space

  • Paper: http://openaccess.thecvf.com/content_CVPR_2020/html/Xiao_Evade_Deep_Image_Retrieval_by_Stashing_Private_Images_in_the_CVPR_2020_paper.html
  • Code: https://github.com/sugarruy/hashstash

Virtual Try-On

Towards Photo-Realistic Virtual Try-On by Adaptively Generating↔Preserving Image Content

  • Paper: https://arxiv.org/abs/2003.05863
  • Code: https://github.com/switchablenorms/DeepFashion_Try_On

HDR

Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline

  • Homepage: https://www.cmlab.csie.ntu.edu.tw/~yulunliu/SingleHDR
  • Paper: https://www.cmlab.csie.ntu.edu.tw/~yulunliu/SingleHDR_/00942.pdf
  • Code: https://github.com/alex04072000/SingleHDR

Adversarial Examples

Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance

  • Paper: https://arxiv.org/abs/1911.02466
  • Code: https://github.com/ZhengyuZhao/PerC-Adversarial

3D Reconstruction

Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild

  • CVPR 2020 Best Paper
  • Homepage: https://elliottwu.com/projects/unsup3d/
  • Paper: https://arxiv.org/abs/1911.11130
  • Code: https://github.com/elliottwu/unsup3d

Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization

  • Homepage: https://shunsukesaito.github.io/PIFuHD/
  • Paper: https://arxiv.org/abs/2004.00452
  • Code: https://github.com/facebookresearch/pifuhd

TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style

  • Paper: http://openaccess.thecvf.com/content_CVPR_2020/papers/Patel_TailorNet_Predicting_Clothing_in_3D_as_a_Function_of_Human_CVPR_2020_paper.pdf
  • Code: https://github.com/chaitanya100100/TailorNet
  • Dataset: https://github.com/zycliao/TailorNet_dataset

Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion

  • Paper: http://openaccess.thecvf.com/content_CVPR_2020/papers/Chibane_Implicit_Functions_in_Feature_Space_for_3D_Shape_Reconstruction_and_CVPR_2020_paper.pdf
  • Code: https://github.com/jchibane/if-net

Learning to Transfer Texture from Clothing Images to 3D Humans

  • Paper: http://openaccess.thecvf.com/content_CVPR_2020/papers/Mir_Learning_to_Transfer_Texture_From_Clothing_Images_to_3D_Humans_CVPR_2020_paper.pdf
  • Code: https://github.com/aymenmir1/pix2surf

Depth Completion

Uncertainty-Aware CNNs for Depth Completion: Uncertainty from Beginning to End

  • Paper: https://arxiv.org/abs/2006.03349
  • Code: https://github.com/abdo-eldesokey/pncnn

Semantic Scene Completion

3D Sketch-aware Semantic Scene Completion via Semi-supervised Structure Prior

  • Paper: https://arxiv.org/abs/2003.14052
  • Code: https://github.com/charlesCXK/3D-SketchAware-SSC

Image/Video Captioning

Syntax-Aware Action Targeting for Video Captioning

  • Paper: http://openaccess.thecvf.com/content_CVPR_2020/papers/Zheng_Syntax-Aware_Action_Targeting_for_Video_Captioning_CVPR_2020_paper.pdf
  • Code: https://github.com/SydCaption/SAAT

Wireframe Parsing

Holistically-Attracted Wireframe Parsing

  • Paper: http://openaccess.thecvf.com/content_CVPR_2020/html/Xue_Holistically-Attracted_Wireframe_Parsing_CVPR_2020_paper.html
  • Code: https://github.com/cherubicXN/hawp

Datasets

3D-ZeF: A 3D Zebrafish Tracking Benchmark Dataset

  • Homepage: https://vap.aau.dk/3d-zef/
  • Paper: https://arxiv.org/abs/2006.08466
  • Code: https://bitbucket.org/aauvap/3d-zef/src/master/
  • Dataset: https://motchallenge.net/data/3D-ZeF20

TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style

  • Paper: http://openaccess.thecvf.com/content_CVPR_2020/papers/Patel_TailorNet_Predicting_Clothing_in_3D_as_a_Function_of_Human_CVPR_2020_paper.pdf
  • Code: https://github.com/chaitanya100100/TailorNet
  • Dataset: https://github.com/zycliao/TailorNet_dataset

Oops! Predicting Unintentional Action in Video

  • Homepage: https://oops.cs.columbia.edu/
  • Paper: https://arxiv.org/abs/1911.11206
  • Code: https://github.com/cvlab-columbia/oops
  • Dataset: https://oops.cs.columbia.edu/data

The Garden of Forking Paths: Towards Multi-Future Trajectory Prediction

  • Paper: https://arxiv.org/abs/1912.06445
  • Code: https://github.com/JunweiLiang/Multiverse
  • Dataset: https://next.cs.cmu.edu/multiverse/

Open Compound Domain Adaptation

  • Homepage: https://liuziwei7.github.io/projects/CompoundDomain.html
  • Dataset: https://drive.google.com/drive/folders/1_uNTF8RdvhS_sqVTnYx17hEOQpefmE2r?usp=sharing
  • Paper: https://arxiv.org/abs/1909.03403
  • Code: https://github.com/zhmiao/OpenCompoundDomainAdaptation-OCDA

Intra- and Inter-Action Understanding via Temporal Action Parsing

  • Paper: https://arxiv.org/abs/2005.10229
  • Homepage & Dataset: https://sdolivia.github.io/TAPOS/

Dynamic Refinement Network for Oriented and Densely Packed Object Detection

  • Paper: https://arxiv.org/abs/2005.09973
  • Code & Dataset: https://github.com/Anymake/DRN_CVPR2020

COCAS: A Large-Scale Clothes Changing Person Dataset for Re-identification

  • Paper: https://arxiv.org/abs/2005.07862
  • Dataset: not yet available

KeypointNet: A Large-scale 3D Keypoint Dataset Aggregated from Numerous Human Annotations

  • Paper: https://arxiv.org/abs/2002.12687
  • Dataset: https://github.com/qq456cvb/KeypointNet

MSeg: A Composite Dataset for Multi-domain Semantic Segmentation

  • Paper: http://vladlen.info/papers/MSeg.pdf
  • Code: https://github.com/mseg-dataset/mseg-api
  • Dataset: https://github.com/mseg-dataset/mseg-semantic

AvatarMe: Realistically Renderable 3D Facial Reconstruction "in-the-wild"

  • Paper: https://arxiv.org/abs/2003.13845
  • Dataset: https://github.com/lattas/AvatarMe

Learning to Autofocus

  • Paper: https://arxiv.org/abs/2004.12260
  • Dataset: not yet available

FaceScape: a Large-scale High Quality 3D Face Dataset and Detailed Riggable 3D Face Prediction

  • Paper: https://arxiv.org/abs/2003.13989
  • Code: https://github.com/zhuhao-nju/facescape

Bodies at Rest: 3D Human Pose and Shape Estimation from a Pressure Image using Synthetic Data

  • Paper: https://arxiv.org/abs/2004.01166
  • Code: https://github.com/Healthcare-Robotics/bodies-at-rest
  • Dataset: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/KOA4ML

FineGym: A Hierarchical Video Dataset for Fine-grained Action Understanding

  • Homepage: https://sdolivia.github.io/FineGym/
  • Paper: https://arxiv.org/abs/2004.06704

A Local-to-Global Approach to Multi-modal Movie Scene Segmentation

  • Homepage: https://anyirao.com/projects/SceneSeg.html
  • Paper: https://arxiv.org/abs/2004.02678
  • Code: https://github.com/AnyiRao/SceneSeg

Deep Homography Estimation for Dynamic Scenes

  • Paper: https://arxiv.org/abs/2004.02132
  • Dataset: https://github.com/lcmhoang/hmg-dynamics

Assessing Image Quality Issues for Real-World Problems

  • Homepage: https://vizwiz.org/tasks-and-datasets/image-quality-issues/
  • Paper: https://arxiv.org/abs/2003.12511

UnrealText: Synthesizing Realistic Scene Text Images from the Unreal World

  • Paper: https://arxiv.org/abs/2003.10608
  • Code & Dataset: https://github.com/Jyouhou/UnrealText/

PANDA: A Gigapixel-level Human-centric Video Dataset

  • Paper: https://arxiv.org/abs/2003.04852
  • Dataset: http://www.panda-dataset.com/

IntrA: 3D Intracranial Aneurysm Dataset for Deep Learning

  • Paper: https://arxiv.org/abs/2003.02920
  • Dataset: https://github.com/intra3d2019/IntrA

Cross-View Tracking for Multi-Human 3D Pose Estimation at over 100 FPS

  • Paper: https://arxiv.org/abs/2003.03972
  • Dataset: not yet available

Others

CONSAC: Robust Multi-Model Fitting by Conditional Sample Consensus

  • Paper: http://openaccess.thecvf.com/content_CVPR_2020/html/Kluger_CONSAC_Robust_Multi-Model_Fitting_by_Conditional_Sample_Consensus_CVPR_2020_paper.html
  • Code: https://github.com/fkluger/consac

Learning to Learn Single Domain Generalization

  • Paper: https://arxiv.org/abs/2003.13216
  • Code: https://github.com/joffery/M-ADA

Open Compound Domain Adaptation

  • Homepage: https://liuziwei7.github.io/projects/CompoundDomain.html
  • Dataset: https://drive.google.com/drive/folders/1_uNTF8RdvhS_sqVTnYx17hEOQpefmE2r?usp=sharing
  • Paper: https://arxiv.org/abs/1909.03403
  • Code: https://github.com/zhmiao/OpenCompoundDomainAdaptation-OCDA

Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision

  • Paper: http://www.cvlibs.net/publications/Niemeyer2020CVPR.pdf
  • Code: https://github.com/autonomousvision/differentiable_volumetric_rendering

QEBA: Query-Efficient Boundary-Based Blackbox Attack

  • Paper: https://arxiv.org/abs/2005.14137
  • Code: https://github.com/AI-secure/QEBA

Equalization Loss for Long-Tailed Object Recognition

  • Paper: https://arxiv.org/abs/2003.05176
  • Code: https://github.com/tztztztztz/eql.detectron2

Instance-aware Image Colorization

  • Homepage: https://ericsujw.github.io/InstColorization/
  • Paper: https://arxiv.org/abs/2005.10825
  • Code: https://github.com/ericsujw/InstColorization

Contextual Residual Aggregation for Ultra High-Resolution Image Inpainting

  • Paper: https://arxiv.org/abs/2005.09704
  • Code: https://github.com/Atlas200dk/sample-imageinpainting-HiFill

Where am I looking at? Joint Location and Orientation Estimation by Cross-View Matching

  • Paper: https://arxiv.org/abs/2005.03860
  • Code: https://github.com/shiyujiao/cross_view_localization_DSM

Epipolar Transformers

  • Paper: https://arxiv.org/abs/2005.04551
  • Code: https://github.com/yihui-he/epipolar-transformers

Bringing Old Photos Back to Life

  • Homepage: http://raywzy.com/Old_Photo/
  • Paper: https://arxiv.org/abs/2004.09484

MaskFlownet: Asymmetric Feature Matching with Learnable Occlusion Mask

  • Paper: https://arxiv.org/abs/2003.10955
  • Code: https://github.com/microsoft/MaskFlownet

Self-Supervised Viewpoint Learning from Image Collections

  • Paper: https://arxiv.org/abs/2004.01793
  • Paper 2: https://research.nvidia.com/sites/default/files/pubs/2020-03_Self-Supervised-Viewpoint-Learning/SSV-CVPR2020.pdf
  • Code: https://github.com/NVlabs/SSV

Towards Discriminability and Diversity: Batch Nuclear-norm Maximization under Label Insufficient Situations

  • Oral
  • Paper: https://arxiv.org/abs/2003.12237
  • Code: https://github.com/cuishuhao/BNM

Towards Learning Structure via Consensus for Face Segmentation and Parsing

  • Paper: https://arxiv.org/abs/1911.00957
  • Code: https://github.com/isi-vista/structure_via_consensus

Plug-and-Play Algorithms for Large-scale Snapshot Compressive Imaging

  • Oral
  • Paper: https://arxiv.org/abs/2003.13654
  • Code: https://github.com/liuyang12/PnP-SCI

Lightweight Photometric Stereo for Facial Details Recovery

  • Paper: https://arxiv.org/abs/2003.12307
  • Code: https://github.com/Juyong/FacePSNet

Footprints and Free Space from a Single Color Image

  • Paper: https://arxiv.org/abs/2004.06376
  • Code: https://github.com/nianticlabs/footprints

Self-Supervised Monocular Scene Flow Estimation

  • Paper: https://arxiv.org/abs/2004.04143
  • Code: https://github.com/visinf/self-mono-sf

Quasi-Newton Solver for Robust Non-Rigid Registration

  • Paper: https://arxiv.org/abs/2004.04322
  • Code: https://github.com/Juyong/Fast_RNRR

A Local-to-Global Approach to Multi-modal Movie Scene Segmentation

  • Homepage: https://anyirao.com/projects/SceneSeg.html
  • Paper: https://arxiv.org/abs/2004.02678
  • Code: https://github.com/AnyiRao/SceneSeg

DeepFLASH: An Efficient Network for Learning-based Medical Image Registration

  • Paper: https://arxiv.org/abs/2004.02097
  • Code: https://github.com/jw4hv/deepflash

Self-Supervised Scene De-occlusion

  • Homepage: https://xiaohangzhan.github.io/projects/deocclusion/
  • Paper: https://arxiv.org/abs/2004.02788
  • Code: https://github.com/XiaohangZhan/deocclusion

Polarized Reflection Removal with Perfect Alignment in the Wild

  • Homepage: https://leichenyang.weebly.com/project-polarized.html
  • Code: https://github.com/ChenyangLEI/CVPR2020-Polarized-Reflection-Removal-with-Perfect-Alignment

Background Matting: The World is Your Green Screen

  • Paper: https://arxiv.org/abs/2004.00626
  • Code: http://github.com/senguptaumd/Background-Matting

What Deep CNNs Benefit from Global Covariance Pooling: An Optimization Perspective

  • Paper: https://arxiv.org/abs/2003.11241
  • Code: https://github.com/ZhangLi-CS/GCP_Optimization

Look-into-Object: Self-supervised Structure Modeling for Object Recognition

  • Paper: not yet available
  • Code: https://github.com/JDAI-CV/LIO

Video Object Grounding using Semantic Roles in Language Description

  • Paper: https://arxiv.org/abs/2003.10606
  • Code: https://github.com/TheShadow29/vognet-pytorch

Dynamic Hierarchical Mimicking Towards Consistent Optimization Objectives

  • Paper: https://arxiv.org/abs/2003.10739
  • Code: https://github.com/d-li14/DHM

SDFDiff: Differentiable Rendering of Signed Distance Fields for 3D Shape Optimization

  • Paper: http://www.cs.umd.edu/~yuejiang/papers/SDFDiff.pdf
  • Code: https://github.com/YueJiang-nj/CVPR2020-SDFDiff

On Translation Invariance in CNNs: Convolutional Layers can Exploit Absolute Spatial Location

  • Paper: https://arxiv.org/abs/2003.07064
  • Code: https://github.com/oskyhn/CNNs-Without-Borders

GhostNet: More Features from Cheap Operations

  • Paper: https://arxiv.org/abs/1911.11907
  • Code: https://github.com/iamhankai/ghostnet
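
The title refers to deriving extra "ghost" feature maps from a few intrinsic ones via cheap per-channel operations instead of a full convolution. A hypothetical NumPy toy of that split (a plain scaling stands in for the paper's learned depthwise filters):

```python
import numpy as np

def ghost_module(x, primary_w, cheap_scale):
    """x: (c_in, n) features. A 1x1 conv (matrix multiply) yields a few
    intrinsic maps; each ghost map is a cheap per-channel transform of
    one intrinsic map (here a scaling, in place of a depthwise filter)."""
    intrinsic = primary_w @ x                   # (m, n) intrinsic maps
    ghosts = cheap_scale[:, None] * intrinsic   # (m, n), nearly free to compute
    return np.concatenate([intrinsic, ghosts])  # (2m, n) output channels

x = np.ones((4, 5))
primary_w = np.ones((2, 4)) * 0.25   # m = 2 intrinsic channels
out = ghost_module(x, primary_w, np.array([2.0, 3.0]))
print(out.shape)                     # (4, 5): 2 intrinsic + 2 ghost maps
```

Half the output channels cost only an elementwise operation, which is the source of the claimed FLOP savings.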

AdderNet: Do We Really Need Multiplications in Deep Learning?

  • Paper: https://arxiv.org/abs/1912.13200
  • Code: https://github.com/huawei-noah/AdderNet
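
The core idea replaces the multiply-accumulate of convolution with a negated L1 distance between filter and input window. A minimal 1-D toy assuming only that idea (the actual layers are 2-D, with batch normalization and adapted gradients):

```python
import numpy as np

def adder_conv1d(x, filters):
    """'Convolution' where each output is the negative L1 distance between
    the filter and the input window -- additions only, no multiplications."""
    k = filters.shape[-1]
    out_len = x.shape[0] - k + 1
    out = np.empty((filters.shape[0], out_len))
    for f_idx, f in enumerate(filters):
        for i in range(out_len):
            out[f_idx, i] = -np.abs(x[i:i + k] - f).sum()
    return out

x = np.array([1.0, 2.0, 3.0, 4.0])
filters = np.array([[1.0, 2.0]])   # one filter of width 2
print(adder_conv1d(x, filters))    # perfect match at position 0 scores 0.0
```

A window identical to the filter scores the maximum of 0; scores decrease as the window moves away from the filter, mirroring how cross-correlation peaks at the best match.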

Deep Image Harmonization via Domain Verification

  • Paper: https://arxiv.org/abs/1911.13239
  • Code: https://github.com/bcmi/Image_Harmonization_Datasets

Blurry Video Frame Interpolation

  • Paper: https://arxiv.org/abs/2002.12259
  • Code: https://github.com/laomao0/BIN

Extremely Dense Point Correspondences using a Learned Feature Descriptor

  • Paper: https://arxiv.org/abs/2003.00619
  • Code: https://github.com/lppllppl920/DenseDescriptorLearning-Pytorch

Filter Grafting for Deep Neural Networks

  • Paper: https://arxiv.org/abs/2001.05868
  • Code: https://github.com/fxmeng/filter-grafting
  • Paper explainer (Chinese): https://www.zhihu.com/question/372070853/answer/1041569335

Action Segmentation with Joint Self-Supervised Temporal Domain Adaptation

  • Paper: https://arxiv.org/abs/2003.02824
  • Code: https://github.com/cmhungsteve/SSTDA

Detecting Attended Visual Targets in Video

  • Paper: https://arxiv.org/abs/2003.02501
  • Code: https://github.com/ejcgt/attention-target-detection

Deep Image Spatial Transformation for Person Image Generation

  • Paper: https://arxiv.org/abs/2003.00696
  • Code: https://github.com/RenYurui/Global-Flow-Local-Attention

Rethinking Zero-shot Video Classification: End-to-end Training for Realistic Applications

  • Paper: https://arxiv.org/abs/2003.01455
  • Code: https://github.com/bbrattoli/ZeroShotVideoClassification

https://github.com/charlesCXK/3D-SketchAware-SSC

https://github.com/Anonymous20192020/Anonymous_CVPR5767

https://github.com/avirambh/ScopeFlow

https://github.com/csbhr/CDVD-TSP

https://github.com/ymcidence/TBH

https://github.com/yaoyao-liu/mnemonics

https://github.com/meder411/Tangent-Images

https://github.com/KaihuaTang/Scene-Graph-Benchmark.pytorch

https://github.com/sjmoran/deep_local_parametric_filters

https://github.com/bermanmaxim/AOWS

https://github.com/dc3ea9f/look-into-object

Acceptance uncertain

FADNet: A Fast and Accurate Network for Disparity Estimation

  • Paper: not yet available
  • Code: https://github.com/HKBU-HPML/FADNet

https://github.com/rFID-submit/RandomFID: acceptance uncertain

https://github.com/JackSyu/AE-MSR: acceptance uncertain

https://github.com/fastconvnets/cvpr2020: acceptance uncertain

https://github.com/aimagelab/meshed-memory-transformer: acceptance uncertain

https://github.com/TWSFar/CRGNet: acceptance uncertain

https://github.com/CVPR-2020/CDARTS: acceptance uncertain

https://github.com/anucvml/ddn-cvprw2020: acceptance uncertain

https://github.com/dl-model-recommend/model-trust: acceptance uncertain

https://github.com/apratimbhattacharyya18/CVPR-2020-Corr-Prior: acceptance uncertain

https://github.com/onetcvpr/O-Net: acceptance uncertain

https://github.com/502463708/Microcalcification_Detection: acceptance uncertain

https://github.com/anonymous-for-review/cvpr-2020-deep-smoke-machine: acceptance uncertain

https://github.com/anonymous-for-review/cvpr-2020-smoke-recognition-dataset: acceptance uncertain

https://github.com/cvpr-nonrigid/dataset: acceptance uncertain

https://github.com/theFool32/PPBA: acceptance uncertain

https://github.com/Realtime-Action-Recognition/Realtime-Action-Recognition
