Human pose estimation using deep convolutional DenseNet hourglass network with intermediate points voting

Shek Wai Chu, Yang Song, Ju Jia Zou, Weidong Cai

Research output: Chapter in Book / Conference Paper › Conference Paper › peer-review

6 Citations (Scopus)

Abstract

Human pose estimation is a long-standing and challenging problem in computer vision. The problem involves the high degree of freedom in the articulation of body limbs, occlusions such as self-occlusion or occlusion by other objects or persons, varied clothing, varied backgrounds in natural images, and foreshortening due to different camera capture angles. In this work we present 1) how the DenseNet module can be used to improve the original ResNet hourglass model, and 2) how intermediate points derived from ground-truth joint segments can be used as output augmentation of a convolutional neural network (ConvNet) to improve prediction accuracy. Further improvement is achieved via intermediate points voting, which optimizes the joint probability distribution of the human joints and the intermediate points. Experimental results on the effects of the intermediate points and the optimization scheme are presented. The proposed method achieves results competitive with state-of-the-art methods.
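As an illustration of the output-augmentation idea described in the abstract, the following is a minimal sketch (not the authors' code): intermediate points are interpolated along each ground-truth limb segment and rendered as additional heatmap targets next to the joint heatmaps. The edge list, number of intermediate points, and Gaussian rendering are illustrative assumptions.

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma=1.5):
    """Render a 2D Gaussian peak at `center` (x, y) on a heatmap of `shape` (H, W)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - center[0]) ** 2 + (ys - center[1]) ** 2) / (2 * sigma ** 2))

def intermediate_point_targets(joints, edges, n_mid, shape):
    """For each limb segment (i, j), interpolate `n_mid` intermediate points along
    the ground-truth segment and render one heatmap per point.

    joints: (K, 2) array of ground-truth joint (x, y) coordinates.
    edges:  list of (i, j) joint-index pairs defining limb segments (assumed).
    n_mid:  number of intermediate points per segment (assumed hyperparameter).
    """
    maps = []
    for i, j in edges:
        for t in np.linspace(0.0, 1.0, n_mid + 2)[1:-1]:  # exclude the two endpoints
            point = (1 - t) * joints[i] + t * joints[j]
            maps.append(gaussian_heatmap(shape, point))
    return np.stack(maps)  # extra output channels alongside the joint heatmaps

# Example: one intermediate point on a hypothetical elbow-wrist segment, 64x64 heatmap.
joints = np.array([[20.0, 30.0], [44.0, 48.0]])
targets = intermediate_point_targets(joints, edges=[(0, 1)], n_mid=1, shape=(64, 64))
print(targets.shape)  # (1, 64, 64)
```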
Original language: English
Title of host publication: Proceedings of the 2019 IEEE International Conference on Image Processing, September 22-25, 2019, Taipei International Convention Center (TICC), Taipei, Taiwan
Publisher: IEEE
Pages: 594-598
Number of pages: 5
ISBN (Print): 9781538662496
DOIs
Publication status: Published - 2019
Event: International Conference on Image Processing
Duration: 22 Sept 2019 → …

Keywords

  • computer vision
  • deep learning
  • human pose estimation
  • joints
