We explore the use of full-body 3D physical simulation for human kinematic tracking from monocular and multi-view video sequences within the Bayesian filtering framework. Towards greater physical plausibility, we consider a human's motion to be generated by a ``feedback control loop'', in which Newtonian physics approximates the rigid-body motion dynamics of the human and the environment through the application and integration of forces. The result is more faithful modeling of human-environment interactions, such as ground contacts, which arise from collisions and the human's motor control.
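To make the filtering loop concrete, the following is a minimal sketch of a single particle-filter update in which the motion model is a physics step rather than a purely kinematic one. The body model, the sampled control forces, and the image likelihood are simplified placeholders (hypothetical names), not the components used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
DT, GRAVITY, N_PARTICLES, N_DOF = 1.0 / 30.0, -9.8, 100, 34

def physics_step(q, qdot, control_force):
    """Integrate simplified rigid-body dynamics with a crude ground-contact clamp."""
    qddot = control_force.copy()
    qddot[2] += GRAVITY                      # gravity acts on the root's vertical DOF
    qdot = qdot + DT * qddot
    q = q + DT * qdot
    if q[2] < 0.0:                           # no penetration through the ground plane
        q[2], qdot[2] = 0.0, 0.0
    return q, qdot

def image_likelihood(q, observation):
    """Placeholder likelihood of a pose given the current image features."""
    return np.exp(-0.5 * np.sum((q - observation) ** 2))

def filter_step(particles, observation):
    """Propagate every particle through the simulator, then reweight and resample."""
    propagated, weights = [], []
    for q, qdot in particles:
        u = rng.normal(0.0, 5.0, size=N_DOF)     # sampled motor-control forces
        q, qdot = physics_step(q, qdot, u)
        propagated.append((q, qdot))
        weights.append(image_likelihood(q, observation))
    weights = np.asarray(weights)
    weights /= weights.sum()
    idx = rng.choice(N_PARTICLES, size=N_PARTICLES, p=weights)
    return [propagated[i] for i in idx]

particles = [(rng.normal(0, 0.1, N_DOF), np.zeros(N_DOF)) for _ in range(N_PARTICLES)]
particles = filter_step(particles, observation=np.zeros(N_DOF))
```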
Human control of high-degree-of-freedom robotic systems is often difficult due to the overwhelming number of variables that must be specified. Instead, we propose the use of sparse subspaces embedded within the pose space of a robotic system. Driven by human motion, we address this sparse control problem by uncovering 2D subspaces that allow cursor control, or eventually decoded neural activity, to drive a robotic hand. To address the noise that previous work encountered in pose graph construction and motion capture, we introduce a method for denoising the neighborhood graphs used to embed hand motion into 2D spaces. Such spaces allow control of high-DOF systems through 2D interfaces, such as cursor control via a mouse or decoding of neural activity. We present results demonstrating our approach to interactive sparse control for successful power grasping and precision grasping using a 13-DOF robot hand.
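As a simplified illustration of subspace control, the sketch below drives a 13-DOF hand pose from a 2D input. The paper builds a nonlinear embedding from a denoised neighborhood graph; here a linear PCA subspace stands in for that embedding, and the pose data are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
N_DOF = 13

# Recorded hand poses from human motion (placeholder random data).
poses = rng.normal(size=(500, N_DOF))

# Fit a 2D linear subspace of the pose space.
mean = poses.mean(axis=0)
_, _, vt = np.linalg.svd(poses - mean, full_matrices=False)
basis = vt[:2]                              # two principal directions, shape (2, 13)

def cursor_to_pose(cursor_xy):
    """Map a 2D cursor position (or decoded neural state) to a full 13-DOF hand pose."""
    return mean + np.asarray(cursor_xy) @ basis

# Example: a 2D cursor displacement becomes joint-angle targets for the hand.
joint_targets = cursor_to_pose([0.4, -1.2])
print(joint_targets.shape)                  # (13,)
```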
A gap currently exists between real-world human performance and the decision making of socially interactive robots. Specifically, a robot's decision making needs information about the decision making of its human collaborators. This gap is partially due to the difficulty of estimating human cues, such as pose and gesture, from robot sensing. Towards bridging this gap, we present a method for kinematic pose estimation and action recognition from monocular robot vision through the use of dynamical human motion vocabularies.
We present a method for articulating and posing meshes, in particular facial meshes, through a 2D sketching interface. Our method establishes an interface between 3D meshes and 2D sketching with the inference of reference and target curves. Reference curves allow for user selection of features on a mesh and their manipulation to match a target curve. Our articulation system uses these curves to specify the deformations of a character rig, forming a coordinate space of mesh poses. Given such a coordinate space, our posing system uses reference and target curves to find the optimal pose of the mesh with respect to the sketch input. We present results demonstrating the efficacy of our method for mesh articulation, mesh posing with articulations generated in both Maya and our sketch-based system, and mesh animation using human features from video. Through our method, we aim to provide both novice-accessible mesh articulation and posing interfaces and rapid prototyping of complex deformations for more experienced users.
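The following is a minimal sketch of the curve-based posing step: given sampled reference and target curves, solve for pose-space weights whose deformation carries the reference curve onto the target. The linear blend model and the curve data are simplified placeholders, not the paper's deformation rig.

```python
import numpy as np

N_SAMPLES, N_POSES = 20, 4
rng = np.random.default_rng(0)

base_curve = rng.normal(size=(N_SAMPLES, 2))            # reference curve sampled on the mesh
pose_deltas = rng.normal(size=(N_POSES, N_SAMPLES, 2))  # per-pose displacements of that curve
target_curve = base_curve + 0.7 * pose_deltas[1]        # target curve sketched by the user

# Stack the deltas into a linear system A w = b and solve in the least-squares sense.
A = pose_deltas.reshape(N_POSES, -1).T                  # (2 * N_SAMPLES) x N_POSES
b = (target_curve - base_curve).ravel()
weights, *_ = np.linalg.lstsq(A, b, rcond=None)

posed_curve = base_curve + (A @ weights).reshape(N_SAMPLES, 2)
print(weights)        # recovered pose coordinates, here dominated by pose 1
```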
The SmURV platform (Small Universal Robotics Vehicle) is a comparatively cheap and easy-to-assemble robotics platform for educational, research, and hobby purposes. Essentially, the platform consists of an iRobot Create and a small computer mounted on top of it. Thanks to this simple design, using components available at any larger electronics store, an autonomous robot can be built in a very short amount of time, leaving the focus on the really interesting part: making the robot do something exciting! In the following paragraphs we show what parts are needed to build a SmURV, approximately how much they cost, and how they are assembled. Further, we offer one particular software solution (out of many possible) for controlling a SmURV and writing software for it.
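As a taste of what that software looks like, here is a minimal sketch of driving the Create base over its serial Open Interface from the onboard computer, assuming the pyserial package and the standard OI configuration (57600 baud; opcodes Start=128, Safe=131, Drive=137). The port name is a placeholder for whichever serial device the computer exposes.

```python
import struct
import time

import serial

PORT = "/dev/ttyUSB0"                     # placeholder serial device for the Create

def drive(ser, velocity_mm_s, radius_mm):
    """Send an OI Drive command: velocity and turn radius as signed 16-bit values."""
    ser.write(bytes([137]) + struct.pack(">hh", velocity_mm_s, radius_mm))

with serial.Serial(PORT, baudrate=57600, timeout=1) as ser:
    ser.write(bytes([128]))               # Start: enter the Open Interface
    ser.write(bytes([131]))               # Safe mode: allow actuator commands
    time.sleep(0.2)
    drive(ser, 200, 0x7FFF)               # drive straight at 200 mm/s
    time.sleep(2.0)
    drive(ser, 0, 0x7FFF)                 # stop
```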
Towards accessible teaching of robots from demonstration, we have developed a mixed-reality, distributed, multi-player robotic gaming environment. Our goal is to provide robot learning researchers with a means to collect large corpora of data representative of human decision making. Robot control by a human operator (or teleoperation) is cast in a video-game-style interface to leverage the ubiquity and popularity of games while minimizing tedium in robot training.