FingerPoseNet: A finger-level multitask learning network with residual feature sharing for 3D hand pose estimation

Written on 13/03/2025
by Tekie Tsegay Tewolde

Neural Netw. 2025 Mar 10;187:107315. doi: 10.1016/j.neunet.2025.107315. Online ahead of print.

ABSTRACT

Hand pose estimation approaches commonly rely on shared hand feature maps to regress the 3D locations of all hand joints. Consequently, they struggle to enhance finger-level features, which are invaluable for capturing joint-to-finger associations and articulations. To address this limitation, we propose a finger-level multitask learning network with residual feature sharing, named FingerPoseNet, for accurate 3D hand pose estimation from a depth image. FingerPoseNet comprises three stages: (a) a shared base feature map extraction backbone based on a pre-trained ResNet-50; (b) a finger-level multitask learning stage that extracts and enhances feature maps for each finger and the palm; and (c) a multitask fusion layer that consolidates the estimates produced by each subtask. We exploit multitask learning by decoupling hand pose estimation into six subtasks, one for each finger and one for the palm. Each subtask is responsible for subtask-specific feature extraction, enhancement, and 3D keypoint regression. To enhance subtask-specific features, we propose a residual feature-sharing approach that mines supplementary information from all subtasks. Experiments on five challenging public hand pose datasets (ICVL, NYU, MSRA, Hands-2019-Task1, and HO3D-v3) demonstrate significant improvements in accuracy over state-of-the-art approaches.

PMID:40081269 | DOI:10.1016/j.neunet.2025.107315
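
The following is a minimal PyTorch sketch of the three-stage layout described in the abstract: a shared ResNet-50 backbone, six subtask branches (five fingers plus the palm) with residual feature sharing, and a fusion step over the per-subtask estimates. The class names (SubtaskBranch, FingerPoseNetSketch), layer widths, joint counts per subtask, and the mean-over-other-branches rule for the shared residual are all illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torchvision.models as models

class SubtaskBranch(nn.Module):
    """One finger/palm subtask: feature enhancement + 3D keypoint regression.
    Layer sizes here are illustrative, not taken from the paper."""
    def __init__(self, in_channels, num_joints):
        super().__init__()
        self.enhance = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=3, padding=1),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
        )
        self.regress = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(256, num_joints * 3),  # (x, y, z) per joint
        )

    def forward(self, shared, residual):
        # Residual feature sharing: supplement this subtask's input with
        # features aggregated from the other subtasks.
        feat = self.enhance(shared + residual)
        return self.regress(feat)

class FingerPoseNetSketch(nn.Module):
    """Three-stage layout: shared ResNet-50 backbone -> six subtask
    branches (five fingers + the palm) -> fusion of per-subtask estimates."""
    def __init__(self, joints_per_subtask=(4, 4, 4, 4, 4, 1)):  # assumed split of 21 joints
        super().__init__()
        backbone = models.resnet50(weights="IMAGENET1K_V1")
        # Adapt the first conv for a 1-channel depth image; keep all conv stages.
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])  # -> 2048 ch
        self.reduce = nn.Conv2d(2048, 256, kernel_size=1)
        self.branches = nn.ModuleList(
            SubtaskBranch(256, n) for n in joints_per_subtask
        )
        total = sum(joints_per_subtask) * 3
        self.fuse = nn.Linear(total, total)  # assumed linear fusion layer

    def forward(self, depth):
        shared = self.reduce(self.backbone(depth))
        # First pass: per-subtask features without shared residuals.
        feats = [branch.enhance(shared) for branch in self.branches]
        preds = []
        for i, branch in enumerate(self.branches):
            # Assumed aggregation rule: mean of the other subtasks' features.
            others = torch.stack(
                [f for j, f in enumerate(feats) if j != i]).mean(0)
            preds.append(branch(shared, others))
        return self.fuse(torch.cat(preds, dim=1))  # (B, total_joints * 3)

model = FingerPoseNetSketch()
out = model(torch.randn(2, 1, 224, 224))
print(out.shape)  # torch.Size([2, 63]) for 21 joints x 3 coordinates

Under these assumptions, decoupling the regression head into per-finger branches lets each regressor specialize in one finger's articulation, while the residual term keeps cross-finger context available to every branch.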