View-Dependent Dynamics of Articulated Bodies


Abstract



We propose a method for view-dependent simplification of articulated-body dynamics, which enables an automatic trade-off between visual precision and computational efficiency. We begin by discussing the problem of simplifying the simulation based on visual criteria, and show that it raises a number of challenging questions. We then focus on articulated-body dynamics simulation, and propose a semi-predictive approach which relies on a combination of exact, a priori error metrics computations, and visibility estimations. We suggest several variants of semi-predictive metrics based on hierarchical data structures and the use of graphics hardware, and discuss their relative merits in terms of computational efficiency and precision. Finally, we present several benchmarks and demonstrate how our view-dependent articulated-body dynamics method allows an animator (or a physics engine) to finely tune the visual quality and obtain potentially significant speed-ups during interactive or off-line simulations.

 

 

Key Words



dynamics, kinematics, level-of-detail, simulation, articulated bodies

 

 

Full Text    



PDF (6.5 MB)

 

 


Download Video


 

 

Benchmark Scenario                               Video
Swinging Pendulum                                (2.0MB), (4.4MB)
Haptic-enabled Dog Puppet                        (4.0MB)
Hanging Toy Dogs                                 (19.2MB)
Falling Character                                (6.6MB)
Comparison between BVH- and OQ-based Methods     (5.2MB)

 

(Video Codec: DivX 6.0)


 

 

Benchmarking Scenarios


 

 

Swinging Pendulum

 

A pendulum model consisting of three hundred rigid bodies swings under gravity. View-dependent dynamics is applied to the swinging motion in two series of tests: one varying the threshold value of the motion metrics (above), and one varying the viewer's distance to the pendulum (below).
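The trade-off these two tests explore can be sketched as a screen-space test: a joint's motion becomes a candidate for simplification when its world-space motion error, projected to the image plane, drops below a pixel threshold. This is a minimal illustrative sketch, not the paper's actual metric; the function name, the pinhole projection, and all parameter values are assumptions.

```python
def should_rigidify(motion_error, distance, pixel_threshold, focal_length=1.0):
    """Hypothetical sketch of a view-dependent simplification test.

    motion_error    -- world-space error committed by freezing the joint
    distance        -- viewer's distance to the joint
    pixel_threshold -- maximum tolerated screen-space error
    """
    # Simple pinhole model: screen-space size shrinks with distance.
    projected_error = focal_length * motion_error / max(distance, 1e-9)
    return projected_error < pixel_threshold

# The same motion error passes the test far away but not up close,
# so more joints are rigidified as the viewer backs off.
print(should_rigidify(0.05, distance=2.0, pixel_threshold=0.01))   # close viewer
print(should_rigidify(0.05, distance=20.0, pixel_threshold=0.01))  # distant viewer
```

Raising the threshold (the first test series) or increasing the distance (the second) both enlarge the set of joints that pass the test, which is the speed/precision knob described in the abstract.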

 

 

Haptic-enabled Dog Puppet

 

A toy-like dog model consisting of sixteen rigid bodies is interactively manipulated using a haptic interface (a). We use SensAble's Omni haptic device and map its end-effector to a virtual control stick attached to the toy dog by virtual strings. As the user interactively controls the toy dog, some of its links can be hidden by objects in the environment (b). Our view-dependent algorithm automatically rigidifies these hidden links (c).
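The occlusion-driven step above can be sketched as follows: given a per-link visibility estimate (e.g. from hardware occlusion queries, as in the OQ-based variant), fully hidden links are selected for rigidification. This is a hypothetical sketch; the function name, the dictionary-based visibility representation, and the example link names are assumptions, not the paper's data structures.

```python
def hidden_links(links, visibility):
    """Return the links whose estimated visibility is zero.

    links      -- list of link names in the articulated body
    visibility -- map from link name to estimated visible fraction in [0, 1],
                  e.g. obtained from hardware occlusion queries
    """
    return [name for name in links if visibility.get(name, 0.0) == 0.0]

links = ["head", "torso", "tail", "front_leg"]
vis = {"head": 1.0, "torso": 0.6, "tail": 0.0, "front_leg": 0.0}
print(hidden_links(links, vis))  # → ['tail', 'front_leg']
```

In the actual system the selected links would then be merged into rigid groups, so their internal joints are no longer simulated until they become visible again.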

 

 

Hanging Toy Dogs

 

One hundred toy-like dog models are attached to springs whose other ends are fixed in space. Initially, the dogs are twisted away from their equilibrium state to give them an initial rotational velocity. The dogs are then released and simulated dynamically; during the simulation, a random torque is intermittently applied to them.

 

 

Falling Character

 

A character consisting of twenty-nine rigid bodies falls to the floor under gravity. As the viewer moves away from the character, the view-dependent dynamics automatically rigidifies some joints while preserving the overall look of the motion. Top: our algorithm automatically simplifies the dynamics of the falling character as its distance to the viewer increases. Bottom: the corresponding rigidification at this time step (one color per rigid group).

 

 

 

Links to Relevant Research


Continuous Collision Detection for Adaptive Simulations of Articulated Bodies project page

by Sujeong Kim, Stephane Redon, Young J. Kim

 

Adaptive Dynamics of Articulated Bodies project page
by Stephane Redon, Nico Galoppo and Ming C. Lin

Fast Continuous Collision Detection for Articulated Models project page

by Stephane Redon, Young J. Kim, Ming C. Lin, Dinesh Manocha

 



Copyright 2006 Computer Graphics Laboratory

Dept of Computer Science & Engineering

Ewha Womans University, Seoul, Korea

[Privacy Policy]