SoftSMPL: Data-driven Modeling of Nonlinear Soft-tissue Dynamics for Parametric Humans

Igor Santesteban, Elena Garces, Miguel A. Otaduy, and Dan Casas
Computer Graphics Forum (Proc. of Eurographics), 2020



Abstract

We present SoftSMPL, a learning-based method to model realistic soft-tissue dynamics as a function of body shape and motion. Datasets to learn such a task are scarce and expensive to generate, which makes trained models prone to overfitting. At the core of our method are three key contributions that enable us to model highly realistic dynamics and to achieve better generalization than state-of-the-art methods, while training on the same data: first, a novel motion descriptor that disentangles the standard pose representation by removing subject-specific features; second, a neural-network-based recurrent regressor that generalizes to unseen shapes and motions; and third, a highly efficient nonlinear deformation subspace capable of representing soft-tissue deformations of arbitrary shapes. We demonstrate qualitative and quantitative improvements over existing methods and, additionally, we show the robustness of our method on a variety of motion capture databases.

Citation

@article{santesteban2020softsmpl,
    journal = {Computer Graphics Forum (Proc. Eurographics)},
    title = {{SoftSMPL: Data-driven Modeling of Nonlinear Soft-tissue Dynamics for Parametric Humans}},
    author = {Santesteban, Igor and Garces, Elena and Otaduy, Miguel A. and Casas, Dan},
    year = {2020}
}

Description and Results

SoftSMPL is a learning-based method to model realistic soft-tissue dynamics as a function of body shape and motion.

Our method runs at real-time rates and allows the user to interactively manipulate the shape of the character while visualizing the regressed dynamics. Notice how the soft-tissue deformation changes when the shape parameters are modified.
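
To make the runtime loop concrete, below is a minimal sketch (in Python) of how regressed offsets can be layered on a parametric body model each frame. The callables body_model and regressor, and all other names, are hypothetical placeholders rather than the authors' released code; exactly where the offsets enter the SMPL pipeline is glossed over here.

# Minimal per-frame animation loop (illustrative sketch, hypothetical API).
def animate(body_model, regressor, poses, shape):
    """Yield per-frame vertices with regressed soft-tissue offsets applied."""
    hidden = None  # recurrent state carried across frames
    for pose in poses:  # one pose vector per frame
        # Regress per-vertex 3D offsets from the current pose and body shape.
        offsets, hidden = regressor(pose, shape, hidden)
        # Add the soft-tissue displacements on top of the body-model surface.
        yield body_model(pose, shape) + offsets

Because both the regressor and the body model are cheap to evaluate, the shape parameters can be edited on the fly and the loop simply picks up the new values at the next frame.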

At the core of our method is a neural-network-based soft-tissue regressor that outputs per-vertex 3D offsets encoded in a novel and highly efficient nonlinear subspace. Key to our method is the observation that traditional pose representations for human models are entangled with subject- and shape-specific features. We propose a novel pose descriptor that disentangles the pose space, producing a lower-dimensional representation that keeps the global pose of the actor while removing subject-specific local features. Additionally, we mitigate the subject-specific dynamics that also remain entangled in the pose vector via a novel motion transfer technique. A sketch of how such a regressor could be structured is shown below.
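
As a rough illustration of this architecture, the PyTorch sketch below pairs a GRU with a small decoder that expands subspace coefficients into per-vertex offsets. All dimensions, the stand-in MLP decoder, and every identifier are assumptions made for illustration; the paper's exact layers and its learned nonlinear subspace differ.

import torch
import torch.nn as nn

class SoftTissueRegressor(nn.Module):
    """GRU-based sketch: pose descriptor + shape -> per-vertex offsets."""
    def __init__(self, pose_dim=64, shape_dim=10, subspace_dim=25,
                 hidden_dim=256, num_verts=6890):
        super().__init__()
        # Recurrent core: maps the disentangled pose descriptor (plus shape)
        # to coefficients of the deformation subspace, frame by frame.
        self.gru = nn.GRU(pose_dim + shape_dim, hidden_dim, batch_first=True)
        self.to_subspace = nn.Linear(hidden_dim, subspace_dim)
        # Decoder: expands subspace coefficients to per-vertex 3D offsets.
        # (A plain MLP stands in for the paper's learned nonlinear subspace.)
        self.decoder = nn.Sequential(
            nn.Linear(subspace_dim, 512), nn.ReLU(),
            nn.Linear(512, num_verts * 3),
        )

    def forward(self, pose_seq, shape, hidden=None):
        # pose_seq: (batch, frames, pose_dim); shape: (batch, shape_dim)
        shape_seq = shape[:, None, :].expand(-1, pose_seq.shape[1], -1)
        feats, hidden = self.gru(torch.cat([pose_seq, shape_seq], dim=-1), hidden)
        coeffs = self.to_subspace(feats)          # (batch, frames, subspace_dim)
        offsets = self.decoder(coeffs)            # (batch, frames, num_verts * 3)
        return offsets.view(*offsets.shape[:2], -1, 3), hidden

Feeding a sequence of pose descriptors and a shape vector, e.g. SoftTissueRegressor()(torch.randn(1, 100, 64), torch.randn(1, 10)), yields a (1, 100, 6890, 3) tensor of offsets; the recurrent state supplies the velocity and history dependence that a static per-frame regressor lacks.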

We demonstrate the generalization capabilities of our method using MoCap sequences from the CMU dataset across a variety of body shapes. For additional results, please see the supplementary video.

Contact

Igor Santesteban – igor.santesteban@urjc.es
Dan Casas – dan.casas@urjc.es