For this project we started with a basic face mask mesh available on Blender Swap, created by M.Talholodhi under a Creative Commons license.
The basic mesh has locks on location, rotation, and scale. Hit “N” to pull up the properties region panel, then unlock those transforms (untick the lock toggles) so you can move or resize the mesh.
Initially we decided on a Mardi Gras theme for a long-term club project. I have removed all constraints and bones from the basic blend file, so we are back to the basic mesh, available for download here –> Mardi Gras Mask
The idea for a quicker club project was to have club members add the extra face mask ribbons and bells without the objects being directly connected to the mask object/vertices. We hoped this would make for an easy exercise and give us a variety of ribbon types and techniques to discuss at the next meeting. You can easily do this by creating a separate object while in Object Mode.
We then loaded a reference photo of a Mardi Gras mask, then built and warped the ribbons by parenting them to Bézier curves. Next we UV mapped the face mask onto the background image just to get a quick idea of what the finished model might look like. As you can see by looking at the side of the mask, one issue with UV unwrapping is that it is a lot like peeling a world map: round objects do not flatten well, which is why there are several choices of how you can UV unwrap.
In this example the face was straight on when unwrapped, and the UV unwrap method was “Project From View.” While this did a pretty good job, if you look at the side of the face you will see that it is not uniform and the sides are a bit blurred. To prevent this, a test UV map is used, like the one pictured here, so that the UV vertices can be adjusted proportionally in the UV mapping window.
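To see why “Project From View” blurs the sides, here is a minimal sketch in plain Python (hypothetical vertex data, not Blender’s actual code): projecting from the view just drops the depth axis, so equal stretches of a rounded surface near the silhouette get squeezed into tiny slivers of UV space.

```python
import math

def project_from_view(vertices):
    """Map 3D vertices to UVs by dropping Z and normalizing X/Y to 0..1."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    span_x = (max(xs) - min(xs)) or 1.0   # guard against a flat axis
    span_y = (max(ys) - min(ys)) or 1.0
    return [((x - min(xs)) / span_x, (y - min(ys)) / span_y)
            for x, y, _ in vertices]

# Points along the equator of a unit sphere, each pair 15 degrees apart.
verts = [(math.sin(math.radians(a)), 0.5, math.cos(math.radians(a)))
         for a in (0, 15, 30, 75, 90)]
uvs = project_from_view(verts)

# Near the front (0-15 deg) the U step is large; near the side
# (75-90 deg) the same 15 degrees of surface collapses into a tiny
# U step, so the texture there is stretched and looks blurred.
front_step = uvs[1][0] - uvs[0][0]   # about 0.26
side_step = uvs[4][0] - uvs[3][0]    # about 0.03
print(front_step, side_step)
```

The same 15 degrees of face gets roughly eight times less texture area at the side than at the front, which is exactly the blurring visible on the mask.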
To learn more about UV mapping here is a brief tutorial –> UV MAPPING
On a side note, as Art Director Lisa Taylor points out, even though the face is a 3D model, it lacks a certain amount of depth. While lighting helps with shadows, the image is still a bit flat. By adding additional meshes with UV images that have transparent backgrounds, we gain more feel of the third dimension, as pictured here:
Another little suggestion from Lisa was to raise the crown so that it doesn’t look like a unibrow. And yes, on the finished product we did just that, which you will see later. For now, if you want to learn more about UV images with transparent image alpha map nodes, click here –> TRANSPARENT IMAGES
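The transparency trick above boils down to standard “alpha over” compositing, which is the operation those alpha map nodes perform per pixel. A minimal sketch in plain Python (the pixel values are made up for illustration):

```python
# "Alpha over": the foreground's alpha decides how much of the
# background shows through at each pixel.
def alpha_over(fg, bg):
    """Composite an RGBA foreground pixel over an RGB background pixel."""
    r, g, b, a = fg
    return tuple(c * a + d * (1.0 - a) for c, d in zip((r, g, b), bg))

ribbon = (0.8, 0.1, 0.6, 0.5)    # half-transparent magenta ribbon pixel
mask_px = (0.9, 0.8, 0.2)        # gold mask pixel behind it
print(alpha_over(ribbon, mask_px))  # an even blend of the two colors
```

Wherever the ribbon image’s alpha is 0, the mask behind it shows through untouched, which is what lets a flat plane with a transparent PNG read as extra geometry.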
Now that we have a colored model we are content with, we get into the lip sync process.
Step 1. Make (record) your own sound bite using a program like –> AUDACITY or download a movie audio clip. The MP3 file format is preferred.
For a tutorial on making shape keys –> SHAPE KEYS
Here we hit the first hurdles of using free/open-source programs. Since Papagayo is a freebie, support for it is limited. One bug is that if you save a work file you can’t open it again later. (This is now fixed in version 2.0 beta, but other options are not available there.) So, sticking with version 1.2, any sound bites need to be done on the spot.
Another issue is that in the earlier version you are also limited in how much text or audio you can process in one sitting. Since good cinematography recommends that no film shot be longer than 10 seconds, we would recommend using that limit as a general principle. For this project we pushed the limit with an 18-second sound clip and experienced no issues.
Once you’ve loaded the sound bite into Papagayo, added your text, produced your phonetic breakdown, and exported the .dat file (which does work), you can get back into Blender.
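If you are curious what that exported .dat file actually contains, here is a minimal sketch of a parser for it (assuming the usual MOHO layout: a "MohoSwitch1" header line followed by one "frame phoneme" pair per line — the sample data below is made up):

```python
def parse_moho_dat(text):
    """Return a list of (frame, phoneme) tuples from a MOHO .dat string."""
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    if lines and lines[0].startswith("MohoSwitch"):
        lines = lines[1:]          # skip the format header line
    keyframes = []
    for ln in lines:
        frame, phoneme = ln.split()
        keyframes.append((int(frame), phoneme))
    return keyframes

# Hypothetical snippet of a Papagayo export.
sample = """MohoSwitch1
1 rest
12 MBP
15 AI
22 rest
"""
print(parse_moho_dat(sample))
```

Each pair is simply “on this frame, switch the mouth to this phoneme,” which is all the Blender add-on needs to lay down shape key keyframes.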
Step 4. Adding the Lip Sync Add-on
The Blender wiki section on “lip sync” pictured below has the link for this file, or you can just click –> LIP SYNC and read the paragraph below.
After wading through several tutorials and vague online pages, we discovered that in order to get the lip sync add-on for Blender you first need to click on the online file to open up the text, THEN right-click to download it. Place the download in the Blender add-ons folder or in the work folder you are using for your project.
Then go to Preferences in Blender, click on Add-ons, and at the bottom of the screen you will see the button “Install from File”; click it to load the add-on into Blender. Save your settings, and the lip sync tool should now appear in the toolbar. If it doesn’t show up or doesn’t work correctly, go back into Preferences and change your file paths by unchecking the relative-paths option.
With the lip sync tool showing and working, you can now click on the .dat file you generated from Papagayo and run it. If you named all your shape keys the same as Papagayo’s, this will instantly generate all your mouthing keyframes. If not, the tool gives you the ability to link any shape key to any of the Papagayo phonemes (a phoneme being any of the perceptually distinct units of sound in a given language that distinguish one word from another, for example p, b, d, and t in the English words pad, pat, bad, and bat).
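That linking step amounts to a simple lookup table from Papagayo’s default (Preston Blair) mouth set to your own shape keys. A sketch of such a table — the shape key names on the right are hypothetical, substitute whatever you named yours in Blender:

```python
# Papagayo's ten default mouth shapes mapped to (made-up) shape key names.
PHONEME_TO_SHAPE_KEY = {
    "AI": "Mouth_AI", "O": "Mouth_O", "E": "Mouth_E", "U": "Mouth_U",
    "etc": "Mouth_etc", "L": "Mouth_L", "WQ": "Mouth_WQ",
    "MBP": "Mouth_MBP", "FV": "Mouth_FV", "rest": "Mouth_rest",
}

def shape_key_for(phoneme):
    """Look up which shape key to keyframe for a Papagayo phoneme."""
    return PHONEME_TO_SHAPE_KEY[phoneme]

print(shape_key_for("MBP"))  # the closed-lips key used for p/b/m sounds
```

Naming your shape keys to match the left-hand column from the start skips this mapping step entirely, which is the “instant” path mentioned above.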
With all that done now, we are ready for the final shake and bake.
For our example we then rendered frames 18–438 at 100 samples in Cycles, with HQ render settings of 640 × 480 at 90% compression, using a dual-core processor with no graphics card.
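A quick back-of-the-envelope check on those figures (plain arithmetic, useful for budgeting your own renders):

```python
# Frames 18-438 inclusive, rendered in 4 hours on a dual-core CPU.
first_frame, last_frame = 18, 438
total_frames = last_frame - first_frame + 1        # 421 frames
render_hours = 4
seconds_per_frame = render_hours * 3600 / total_frames
print(total_frames, round(seconds_per_frame, 1))   # about 34 s per frame
```

At 24 fps that 421-frame run is just under 18 seconds of footage, matching the sound clip length, so multiplying your clip length by your per-frame time gives a decent render-time estimate before you commit.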
And after 4 hours of rendering plus 5 minutes of Windows Movie Maker editing, the results are: (patience – there is an 8-second lead-in.)
If there are any suggestions, comments or questions please let us know. Thx. Alex