Back when I first started there were a number of things I wanted to do that I never got around to (usually because doing them would either have been hideously expensive, out of my skill range, or simply wasn't possible on a small setup). But now I have both time and space (and on occasion... resources), I thought I'd start doing some of the things I always wanted to do. Hence my recent invention of a hybrid 3d scanning rig capable of very high detail that's also portable without compromising on quality (oh and it's a shitload faster as well... but that's for another day).
But you may be asking why, if I can (and do) sculpt likenesses like the one of myself in the blog post below, I would want or need a 3d scanning rig. The answer is simple and rather obvious when you think about it. Creating a digital double from scratch takes time. Time is money, and often clients don't have the time and money for me to spend what to them is an age nailing a perfect likeness. At times like that it's actually less expensive for them to have a 3d scan done. But at other times (like, for example, the Jesse Owens digi double I did for the London Olympics opening ceremony last year), scanning is not possible, so you sure as hell better have the skills to do it the old fashioned way by hand.
Onto the facial mocap... (which, let's face it, is why you're probably reading this)
I decided, as I was covering the workflow for sculpting and modeling for use in a facial mocap rig, that it'd be fun to do a quick 'advert' for the end user event. It also gave me a chance to start ironing the bugs out of the app I wrote to do the facial mocap... as 2200 frames has a habit of exposing any unusual bugs lol. My system is a rather Heath Robinson affair at the moment (look it up if you don't know to what I am referring), but it works extremely well. Amazingly so when you bear in mind the programming only took 4 days... and that included putting together the rig itself as well. Once I am 100% happy with the rig and app I will arrange it into something nicer on the eyes lol.
The rig.... what is in it?
|Yes, that is 3ds Max; no, it is not what I am using for the facial mocap... I could show you, but I'd have to kill you|
While I have no intention of EVER selling my facial mocap app (it's for in-house use only), I have no qualms about outlining the actual gear I use.
Here's the 'recipe' list
- 2 Kinects (360 versions... early models work best... no idea why, I haven't worked that out yet)
- 1 Sony NEX-5N (sometimes swapped for my Canon 5D Mk2, although oddly enough the NEX gives better results)
- 1 Zoom H4n high fidelity audio recorder (no point doing any facial mocap if the sound is shit now is there??? Plus I already had it lying around). It also makes an excellent audio-to-digital interface
- My brain
- 1 computer
Now, while the example above is uncleaned (apart from the eye blink near the end that needed cleaning) and uses 2 Kinects, I have got exactly the same quality with a single one... but two give more 'options' later in the game. If you are wondering how I stop the interference between them that is inherent when you have more than one Kinect... thank the Microsoft R and D dept and a tiny bit of info hidden away on the net. All you need is a bit of vibration on each Kinect; it then magically auto syncs to the others. (I'm achieving this with a bit of magic known as an old knackered pair of very loud headphones placed carefully, with 'Ace of Spades' by Motorhead playing... which works magically best for some reason.)
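For the technically curious: once the two Kinects have been calibrated against each other, combining their views is conceptually just transforming one point cloud into the other's coordinate space and stacking them. Here's a rough Python sketch of that idea (this is NOT my app's code; the function names and the calibration matrix below are made-up toys for illustration):

```python
import numpy as np

def merge_point_clouds(cloud_a, cloud_b, b_to_a_transform):
    """Merge two Kinect point clouds into one frame.

    cloud_a, cloud_b: (N, 3) arrays of XYZ points, each in its own
    camera's coordinate space.
    b_to_a_transform: 4x4 extrinsic matrix mapping camera B's space
    into camera A's space (found via a calibration step).
    """
    # Promote B's points to homogeneous coordinates and apply the extrinsic
    homogeneous_b = np.hstack([cloud_b, np.ones((cloud_b.shape[0], 1))])
    aligned_b = (b_to_a_transform @ homogeneous_b.T).T[:, :3]
    # Stack both views into a single, denser cloud in camera A's space
    return np.vstack([cloud_a, aligned_b])

# Toy example: pretend camera B sits 0.5 m to the right of camera A
transform = np.eye(4)
transform[0, 3] = 0.5
a = np.array([[0.0, 0.0, 1.0]])
b = np.array([[-0.5, 0.0, 1.0]])  # the same physical point, seen from B
merged = merge_point_clouds(a, b, transform)
```

In reality the extrinsic matrix comes out of a proper calibration pass, not a guess like the one above.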
The capture is in real time and, while I'm not telling you how it works, it can output either to a blend shapes workflow or to a point cloud to run a Face Robot or similar rig. It enables me to transfer the animation between rigs in about two clicks (that's thanks to a script Paul Neale ran up for me as a favor). It can also theoretically be used on any face, human, animal or 'other', with ease. So yes, you could map your facial movements onto a donkey if you so wished, or pretend to be a fucking monkey if that's your bag.
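For those wondering what 'outputting to a blend shapes workflow' actually means in practice: the solver produces a weight per shape per frame, and the final face is just the neutral mesh plus the weighted sum of the shape deltas. That's why retargeting to a donkey (or anything else) works, since you only swap the delta set. A quick Python sketch of the maths (again, not my actual code; the function name and array layout are just for illustration):

```python
import numpy as np

def apply_blend_shapes(neutral, shape_deltas, weights):
    """Drive a face mesh from captured blend shape weights.

    neutral: (V, 3) vertex positions of the rest pose.
    shape_deltas: (S, V, 3) per-shape vertex offsets from neutral.
    weights: (S,) captured weights for this frame, usually in 0..1.
    """
    # Final pose = neutral + weighted sum of every shape's deltas
    return neutral + np.tensordot(weights, shape_deltas, axes=1)

# Toy frame: 2-vertex 'mesh', one shape half-activated
frame = apply_blend_shapes(np.zeros((2, 3)),
                           np.ones((1, 2, 3)),
                           np.array([0.5]))
```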
Below you can see the raw mocap data... there is only about 75-90 milliseconds of delay (perfectly reasonable) and it does the job. Again... these are early days, and this is a '4 hours coding' version that, if I were a software house, wouldn't even count as a bloody alpha, let alone a beta lol. But I know that with some refining it's perfectly possible to increase motion fidelity by at least another 70%.
The only thing that mildly pisses me off is that, as I had a very tight self-imposed deadline on this, the model suffered as a result, as did the render setup (render time could be no more than 1 min a frame... so it's a bit shit as a result).
But anyway... enjoy, and remember I'm covering digital doubles in lecture number 1 at EUE, which will cover hand sculpting digital doubles and handling the crap scan data you will often get handed. Lecture 2 covers sculpting and modeling for facial motion capture, including my own workflow to enable you to get a full set of blend shapes done in about 20-40 mins... oh, and the facial motion capture setup and some explanation of how you can set a similar rig up.