Title |
---|
Fine-Grained Semantic Segmentation of Motion Capture Data using Dilated Temporal Fully-Convolutional Networks |
Abstract |
---|
Human motion capture data is widely used in data-driven character animation. To generate realistic, natural-looking motions, most data-driven approaches require considerable pre-processing effort, including motion segmentation and annotation. Existing (semi-)automatic solutions either require hand-crafted features for motion segmentation or do not produce the semantic annotations required for motion synthesis and for building large-scale motion databases. In addition, human-labeled annotation data inherently suffers from inter- and intra-labeler inconsistencies. We propose a semi-automatic framework for semantic segmentation of motion capture data based on supervised machine learning. It first transforms a motion capture sequence into a ``motion image'' and applies a convolutional neural network for image segmentation. Dilated temporal convolutions enable the extraction of temporal information from a large receptive field. Our model outperforms two state-of-the-art models for action segmentation, as well as a popular network for sequence modeling. Above all, our method is robust to noisy and inaccurate training labels and can therefore tolerate human errors during the labeling process. |
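The abstract's key mechanism is that stacking dilated temporal convolutions lets the receptive field over the frame axis grow exponentially with depth while the parameter count grows only linearly. A minimal sketch of that idea is below; the kernel size of 3 and doubling dilation schedule are illustrative assumptions, not the paper's actual hyper-parameters.

```python
def receptive_field(kernel_size, dilations):
    """Receptive field (in frames) of a stack of stride-1 dilated 1-D
    convolutions: rf = 1 + sum((k - 1) * d) over the layer dilations."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

def dilated_conv1d(x, w, dilation):
    """Naive 'valid' dilated 1-D convolution over a frame sequence x:
    tap i of the kernel w reads the input `dilation` frames apart."""
    k = len(w)
    out_len = len(x) - (k - 1) * dilation
    return [sum(w[i] * x[t + i * dilation] for i in range(k))
            for t in range(out_len)]

# Four layers of kernel size 3 with dilations 1, 2, 4, 8 already cover
# 1 + 2*(1 + 2 + 4 + 8) = 31 frames per output position.
print(receptive_field(3, [1, 2, 4, 8]))  # 31
```

With a doubling schedule, each added layer roughly doubles the temporal context, which is why such networks can label a frame using information far away in the sequence without pooling or recurrence.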
Year | DOI | Venue
---|---|---
2019 | 10.2312/egs.20191017 | Eurographics

DocType | Volume | Citations
---|---|---
Journal | abs/1903.00695 | 0

PageRank | References | Authors
---|---|---
0.34 | 7 | 7
Name | Order | Citations | PageRank
---|---|---|---
Noshaba Cheema | 1 | 2 | 3.75 |
Somayeh Hosseini | 2 | 2 | 1.38 |
Janis Sprenger | 3 | 0 | 1.01 |
Erik Herrmann | 4 | 6 | 3.90 |
Han Du | 5 | 6 | 3.90 |
Klaus Fischer | 6 | 495 | 52.85 |
Philipp Slusallek | 7 | 2420 | 231.27 |