Motion capture, or MoCap, is the technique of capturing a human actor's physical performance and applying the captured data to a 3D computer-generated character so that it moves in the same convincing, life-like manner. Unlike time-consuming 3D computer animation, in which an animator must manually manipulate the poses of a digital character's skeletal rig (its underlying virtual armature), motion capture allows you to record the movement of a character in real time. Computer animation, by contrast, can take days or even weeks to accomplish.

So if motion capture is so fast, why not use it all the time? Why bother animating digital characters by hand at all? The answer lies in the kind of work you are doing. If you are looking for stylized, exaggerated, or cartoonish motion for your characters, the kind of motion you would expect to see in a Looney Tunes animation, you'll get the best results by animating them by hand. That is what the art of animation is all about. Just be prepared to spend a lot of time doing so. However, if you want truly life-like movement, or want to integrate 3D characters into live-action shots alongside human actors (think of motion pictures like 2009's Avatar or 2011's Rise of the Planet of the Apes), then motion capture is your best bet. James Cameron was, in fact, an early pioneer of motion capture in feature films, having used it as early as Titanic (1997). Today, the technique is used extensively in the motion picture industry for visual effects.

One problem with traditional motion capture, however, is that such systems are often expensive and involved. Most of the time, they require actors to wear complex, sensor-laden suits. Several years ago, a company called iPi Soft, based in Moscow, developed a motion capture system that was totally markerless. In other words, to the great interest of many who work in animation and visual effects, it offered a new technology that didn't require an actor to wear any suit at all, or even special clothing for that matter. Instead, the system analyzed the motion of the body and automatically extracted the MoCap data from it. It also had the option to work with depth sensors like the Microsoft Kinect, or with small, inexpensive video cameras such as the Sony PlayStation 3 Eye or the Logitech C922 (see Figure 1).

Figure 1: iPi Soft Motion Capture is a markerless motion capture system that allows you to capture human performances and apply them to 3D digital characters.

I wanted to see how well iPi Soft Motion Capture worked, so I got my hands on the depth sensor configuration, consisting of two Kinects, and downloaded iPi Soft's software from its website. After jumping around and doing a little dancing, I fed the depth videos into iPi Soft to track my movements and applied the results to a 3D character. iPi's markerless motion capture worked as promised, without the hassle or expense of a MoCap suit. The era of "Motion Capture for the Masses" had begun.

Previously, when using depth sensors, motion capturing with iPi Soft was a two-step process. While this method is effective, some users may prefer to see a real-time preview of their characters before and during the performance. Recently, iPi Soft released version 4.1 of its markerless MoCap system, and with it an important new feature that users had long been waiting for, one previously available only in more costly systems: real-time tracking for multiple depth sensors (see Figure 2).

Figure 2: Real-time tracking in version 4.1 allows you to immediately preview the motion of the actor on your 3D character.