Lip-synching in Flash

Now, here’s the part we’ve all been waiting for . . . a word from our character. If done properly, lip-synching is where a character can really spring to life. This is accomplished by drawing the various mouth positions that are formed for individual phonemes, which are the basic units of sound that make up a spoken word.

Then these phonemes are melded together into morphemes, which are distinct units of a word, like a syllable. Morphemes are then strung together over the course of a sentence to present the illusion of a talking, animated character. Huh? Phonemes? Morphemes? What the devil are we talking about? Well, it’s really not as complicated as all that, but it’s important to know how a spoken word is made. Most languages, although populated with thousands of words, are really made up of around 30 to 60 distinct sounds, or phonemes. For cartooning, these phonemes can be reduced to about 10 basic mouth positions. Some of these positions can be repeated for more than one sound because many sounds share roughly the same mouth positions. Although there are more subtleties in the real world, for cartoons, reliance upon transitions between these mouth positions is convincing enough.
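If it helps to see that reduction spelled out, here is a minimal sketch of one common grouping, written as a simple lookup you could keep beside your timeline. The groupings and shape descriptions are typical examples for English dialog, not a standard built into Flash; adjust them to taste.

// One common reduction of spoken sounds to about ten mouth drawings.
// Illustrative only; groupings vary from animator to animator.
var mouthShapes = {
    "rest"                  : "relaxed, lips closed",
    "A, I"                  : "jaw dropped, mouth open wide",
    "E"                     : "open, corners stretched wide",
    "O"                     : "rounded and open",
    "U"                     : "rounded and small",
    "W, Q"                  : "tight pucker",
    "M, B, P"               : "lips pressed together",
    "F, V"                  : "lower lip tucked under the upper teeth",
    "L, TH"                 : "tongue visible against the teeth",
    "C, D, G, K, N, R, S, Z": "teeth together, slightly open"
};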

Earlier, we suggested that the face in an action (walk) cycle should be drawn without a mouth. That’s because this method facilitates the use of layers (in the timeline) for the addition of the lip-sync. To do this, create a layer above the character so that you can freely draw in the mouth positions needed to add lip-sync. It’s also very helpful to put the voice track on another separate layer directly beneath the Mouth layer. This makes it easy to see the waveform of the sound while you draw, which gives important visual clues to where and when each sound occurs.
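If your copy of Flash supports JSFL scripting (Flash MX 2004 and later; in earlier versions you simply create the layers by hand), a minimal sketch of that layer setup might look like the following. The layer names "Mouth" and "Voice" are only assumptions for this example.

// Minimal JSFL sketch: add a Mouth layer above the selected character
// layer, plus a Voice layer for the dialog track directly beneath it.
// Run from Commands > Run Command with the character layer selected.
var tl = fl.getDocumentDOM().getTimeline();

// addNewLayer() returns the zero-based index of the new layer.
var mouthIndex = tl.addNewLayer("Mouth", "normal", true);   // above the character
tl.currentLayer = mouthIndex;                               // make Mouth current
tl.addNewLayer("Voice", "normal", false);                   // just below Mouth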

Since Flash 4, Flash has had the capability to scrub the timeline, which means that you can drag the Playhead, or current frame indicator, and hear the sound as you drag. This functionality is limited to streaming sounds, meaning sounds that have their Sync option in the Sound panel set to Stream. The capability to hear the sound and see the animation in real time is an important tool for lip-synching. This real-time feedback is critical for getting the timing just right. There’s nothing worse than being plagued with O.G.M.S. (the Old Godzilla Movie Syndrome), in which the mouth doesn’t match the sounds coming from it. To scrub most effectively, here’s a hint: If you’ve been following along, you’ve probably loaded a ton of moving bitmaps into your scene, which can be a serious hindrance to playback within the Flash authoring environment. To overcome this drag and get real-time playback at the full frame rate, simply hide all layers except the mouth layers and turn off antialiasing.
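The layer hiding can also be scripted if JSFL is available. Here is a minimal sketch, assuming the "Mouth" and "Voice" layer names from the setup above; every other layer gets hidden so the timeline scrubs at the full frame rate (antialiasing still has to be turned off from the View menu).

// Minimal JSFL sketch: hide every layer except Mouth and Voice so that
// scrubbing and playback in the authoring environment stay responsive.
var layers = fl.getDocumentDOM().getTimeline().layers;
for (var i = 0; i < layers.length; i++) {
    var keep = (layers[i].name == "Mouth" || layers[i].name == "Voice");
    layers[i].visible = keep;
}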

Shape morphing is not for lip-sync
You may be asking, “What about using shape morphing to save time in lip-synching?” Well, shape morphing is a wonderful tool but, for lip-sync, it’s more hassle than it’s worth. Your mouth drawings will become very complicated because they consist of lips, tongue, teeth, and facial features. Furthermore, because shape morphing seems to work predictably only on the simplest of shapes out of the box, shape hinting is required. Thus, by the time you’ve set all the shape hints (and even heavy hinting still leaves you with a mess at times), you might have had an easier time and obtained a better result (with greater control) if you had drawn it by hand.

Expression and lip-sync
When it comes to control and expression, it’s important to remember to use the full range of expression when drawing the talking mouths. Happy, sad, or confused: these expressions give life to your character. Furthermore, always emphasize mouth movements on those syllables that correspond with spikes of emotion in the voice track; these sections usually show up as easily recognized peaks in the waveform. This device helps to convince the viewer that proper sync is happening.

Lip-sync tricks
There are a few more tricks to help ease the load. When characters talk, they do not always have to be looking you square in the face. Try lip-synching the first few words to establish that the character is speaking, and then obscure the character’s mouth in some natural way. The relay man in Weber’s intestine, shown in the figure below, is a good example of this. His head and body bob with the words being said, but the microphone obscures his mouth in a natural way. This saved a bunch of time but did not detract from his purpose in the story line. Here, a bit of design savvy saved a lot of work.

Lip-synching tricks include economy of effort, such as having the character begin to speak and then turn away naturally.

Many animators use a mirror placed nearby and mouth (act out) the words they are trying to draw. This is extremely helpful when learning to do lip-sync. It is also of great help in mastering facial expressions. Just try not to get too wrapped up in drawing every nuance you see; sometimes less is more. After you get over feeling a bit foolish about talking to yourself in the mirror, you’ll be on your way to animating good, expressive lip-synced sequences. Another trick that you can use to ease the load is to reuse lip-sync. Do this by copying frames from previous stretches of mouth movements to new locations where the words are the same, and then tweak the copied parts to fit the new dialog. Still, there is no magic lip-sync button. Even with all these tricks, effective lip-synching is hard work. It’s also one of the more tedious tasks in animation, as it demands a great deal of practice to get it right.
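If you want to script that copying step, a minimal JSFL sketch might look like this. The layer name and frame numbers are made up for the example, and the pasted frames will still need hand tweaking to fit the new dialog.

// Minimal JSFL sketch: reuse a stretch of mouth animation by copying it
// from one part of the Mouth layer and pasting it where the same words
// recur. The frame numbers here are hypothetical.
var tl = fl.getDocumentDOM().getTimeline();

var mouthLayers = tl.findLayerIndex("Mouth");   // array of matching layer indexes
tl.setSelectedLayers(mouthLayers[0]);

tl.copyFrames(40, 60);     // the phrase as it was first animated
tl.pasteFrames(200);       // where the same words are spoken again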

Synching with music and sound effects
In the introduction, Weber dances to the theme song, shuffling through a Michael Jackson moonwalk and then spinning to the scratch of the synthesizer. This really helps to gel things because the action on screen syncs to the sound (music or effect) and helps to draw in the viewer. If you’ve already succeeded with lip-synching work, then this type of synching is easy. All that’s going on here is a bit of instance swapping set to the beat of the music. Study your music waveform for visual clues, then scrub it to hear the sound, and you’re sure to find the exact section where the change in action (the instance swap) needs to go. You don’t have to make your sync tight to every note. To keep the shot engaging, sync to the highlights, or hard beats.

Adding sound effects is really the fun part. It’s easy and highly effective. Either working from your storyboard or as you’re animating, you’ll know where you want to insert a sound effect. For example, when the anvil hits the head, a CLANK is needed. If the effect you need is on hand, great! Just make sure it has the necessary duration, and then plug it in at the frame where it should start. For broadcast animation, you’ll set the sound Sync pop-up of the Sound panel to Stream for the soundtrack exclusively. In addition to using separate layers for each voice track, it’s wise to confine your sound effects to a layer or two. This leads to less confusion, yet using two layers enables more than one sound effect to occur at a time.
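For illustration, here is a minimal JSFL sketch of dropping that CLANK onto an effects layer. The layer name, sound name, and frame number are all assumptions for the example, and the sound itself must already be imported into the Library.

// Minimal JSFL sketch: place a library sound named "CLANK" on an effects
// layer at the frame where the anvil hits, as an Event sound, while the
// voice/music track stays set to Stream.
var tl = fl.getDocumentDOM().getTimeline();

var fxIndex = tl.findLayerIndex("SFX 1")[0];    // one of the two effects layers
tl.setSelectedLayers(fxIndex);
tl.convertToKeyframes(120);                     // keyframe at the moment of impact

var frame = tl.layers[fxIndex].frames[120];
frame.soundName = "CLANK";                      // name of the sound in the Library
frame.soundSync = "event";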

