Neural Voting Field for Camera-Space 3D Hand Pose Estimation


Lin Huang1*
Chung-Ching Lin2
Kevin Lin2
Lin Liang2
Lijuan Wang2
Junsong Yuan1
Zicheng Liu2

1University at Buffalo
2Microsoft
*Work done during Lin Huang’s internship with Microsoft

[Paper]
[Code]
[Video]

CVPR 2023


We present a unified framework for camera-space 3D hand pose estimation from a single RGB image based on 3D implicit representation. In contrast to recent works, most of which first adopt holistic or pixel-level dense regression to obtain a relative 3D hand pose and then rely on complex second-stage operations to recover the 3D global root or scale, we propose a novel unified 3D dense regression scheme that estimates camera-space 3D hand pose via dense 3D point-wise voting in the camera frustum. Through direct dense modeling in the 3D domain, inspired by Pixel-aligned Implicit Functions for detailed 3D reconstruction, our proposed Neural Voting Field (NVF) fully models 3D dense local evidence and global hand geometry, helping to alleviate common 2D-to-3D ambiguities. Specifically, for a 3D query point in the camera frustum and its pixel-aligned image feature, NVF, represented by a Multi-Layer Perceptron, regresses: (i) its signed distance to the hand surface; (ii) a set of 4D offset vectors (a 1D voting weight and a 3D directional vector to each hand joint). Following a vote-casting scheme, the 4D offset vectors from near-surface points are selected to compute the 3D hand joint coordinates by a weighted average. Experiments demonstrate that NVF outperforms existing state-of-the-art algorithms on the FreiHAND dataset for camera-space 3D hand pose estimation. We also adapt NVF to the classic task of root-relative 3D hand pose estimation, for which NVF likewise obtains state-of-the-art results on the HO3D dataset.
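The vote-casting step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function name, array shapes, and the near-surface threshold `tau` are assumptions for the example. Given query points in the camera frustum, their predicted signed distances, and the predicted 4D offsets (per-joint voting weight plus directional vector), near-surface points are selected and each joint is recovered as a weighted average of the points' votes.

```python
import numpy as np

def cast_votes(points, sdf, weights, directions, tau=0.05):
    """Aggregate per-point votes into 3D joint coordinates.

    points:     (N, 3)    3D query points in the camera frustum
    sdf:        (N,)      predicted signed distance of each point to the hand surface
    weights:    (N, J)    predicted 1D voting weight of each point for each joint
    directions: (N, J, 3) predicted 3D directional vector from each point to each joint
    tau:        scalar    near-surface threshold (hypothetical value for this sketch)
    """
    # Keep only near-surface points, as in the vote-casting scheme.
    mask = np.abs(sdf) < tau
    p, w, d = points[mask], weights[mask], directions[mask]

    # Each selected point votes for every joint: its position plus the
    # predicted directional vector toward that joint.
    votes = p[:, None, :] + d                      # (M, J, 3)

    # Normalize voting weights per joint and take the weighted average.
    w = w / (w.sum(axis=0, keepdims=True) + 1e-8)  # (M, J)
    joints = (w[..., None] * votes).sum(axis=0)    # (J, 3)
    return joints
```

With perfect predictions (directions pointing exactly at the true joints), the weighted average recovers the joint positions regardless of the weights, which is what makes the scheme robust to individual noisy votes.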

Pipeline


Overview


Paper

L. Huang, C. Lin, K. Lin, L. Liang, L. Wang, J. Yuan, Z. Liu
Neural Voting Field for Camera-Space 3D Hand Pose Estimation
CVPR 2023
(hosted on arXiv)


[Bibtex]


Acknowledgements

This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the source code is available online.