

A Tutorial for Half-Life 2

Requirements

* An audio editor such as SoundForge, Adobe Audition, or (for you Open Source/Freeware types) Audacity, to pre-process the sound files.
* The Faceposer utility from the Source SDK.
* The Microsoft Speech SDK 5.1.
* A video capture program such as Fraps, if you wish to make video files of the performance.
* A program such as Windows Movie Maker or Adobe Premiere to post-process your video feed.
* Windows 2000 or XP

Pre-Processing the Sound File

Faceposer is not designed to handle two-to-three-minute sound files; when making a Faceposer lip-synch project that uses a long sound file, you must split the file into smaller pieces using a program such as SoundForge. For how to pre-process (crop, filter, etc.) your sound file in your chosen audio editor, refer to that program's documentation. Use these guidelines when choosing how long to make the files:

* One sentence of voice — or half a sentence, if it is long — works well.
* The smaller the files, the better Faceposer's automatic lip synching will be.
* If you come across a noticeably long instrumental passage, just select it all and save it separately, since it doesn't need to be processed.

NOTE: Audio files MUST be in a format HL2 supports. This means 11/22/44 kHz, 8- or 16-bit PCM WAV. Always save your files in that format.

After you have split the file into all the small pieces you wish to use, you are ready to begin lip synching them in Faceposer.

Audio File Placement

For Faceposer to properly lip-synch your file, you will need to place it in the \sound directory of your custom mod (C:\modname\sound), or the \sound directory of your HL2, CS:S, or HL2DM folder, depending on the Game Configuration the SDK is set to.
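If you have many long files, the splitting and format checks above can be automated. Here is a minimal sketch using Python's standard `wave` module; the helper names, file names, and chunk length are my own for illustration, not part of the tutorial:

```python
import wave

SUPPORTED_RATES = (11025, 22050, 44100)  # 11/22/44 kHz
SUPPORTED_WIDTHS = (1, 2)                # 8- or 16-bit samples

def check_hl2_wav(path):
    """Return True if the WAV matches the PCM formats HL2 accepts."""
    with wave.open(path, "rb") as w:
        return (w.getcomptype() == "NONE"          # uncompressed PCM
                and w.getframerate() in SUPPORTED_RATES
                and w.getsampwidth() in SUPPORTED_WIDTHS)

def split_wav(path, chunk_seconds, out_prefix):
    """Split a long WAV into fixed-length chunks, preserving its format."""
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_chunk = int(params.framerate * chunk_seconds)
        paths = []
        index = 0
        while True:
            frames = src.readframes(frames_per_chunk)
            if not frames:
                break
            out_path = f"{out_prefix}{index:02d}.wav"
            with wave.open(out_path, "wb") as dst:
                dst.setparams(params)   # same rate, width, channels
                dst.writeframes(frames)
            paths.append(out_path)
            index += 1
        return paths
```

Splitting at fixed intervals will cut words in half, so treat the output only as a starting point — you will still want to place the actual cut points between sentences in your audio editor.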
(Program Files\valve\steam\SteamApps\Username\half-life 2\hl2\sound for HL2, Program Files\valve\steam\SteamApps\Username\counter-strike source\cstrike\sound for CS:S, or Program Files\valve\steam\SteamApps\Username\half-life 2 deathmatch\hl2mp\sound for HL2DM.)

Lip Synching the Sound File

Open Faceposer by navigating to Program Files\valve\steam\SteamApps\Username\sourcesdk\bin\faceposer.exe, or launch it from the built-in Steam SDK menu. [EDIT: The memory leak in the Steam menu has been fixed — for now.] Once it boots up and finishes loading, you'll need to load a model — any model with lip-synching capability will do. Go to the File menu, select Load Model, and pick a model in the Open dialog. Choose a character model that has lip-synch capabilities, such as the G-Man, Breen, Alyx, Dr. Kleiner, Barney, or one of the various Civilians. Combine Civil Protection, Metro Cop, and Combine Elite models quite obviously do not lip-synch, but you can still make them say things and animate expression gestures. Heck, you could make a talking trashcan if you so wished. :P

After you have the model loaded, go into the Options menu and select Center on Face. This moves the camera to your chosen model's face and zooms in on it. Next, select the Phoneme Editor tab. This brings up a window with a wave work area and a row of buttons below it. Right-click in the wave work area and select Load..., then select your first sound file to load it into the generator. If your sound file is just music, all you do is right-click and save the file; otherwise follow these steps:

Generating Proper Phonemes in the Phoneme Editor

* Step 1: Right-click the wave file and select Redo Extraction. You will be prompted to enter the sentence text of what is being spoken or sung. Do so and hit Enter. Faceposer will now generate phonemes on its own.
It's worth noting that the sentence text you type in can be spelt the way things sound rather than the way they are written. For example, if the word "commission" looks incorrect when spoken, it may be best to simplify its spelling to "kumishun" for easier recognition by the program.

* Step 2: Commit the generated phonemes by right-clicking the file and selecting Commit Extraction. After doing so, hit Play, watching closely to be sure the phonemes fit the file properly.
* Step 3: If the lip-synch data does not fit the file correctly, you will need to edit the phonemes manually. Listen and watch closely for the parts that are not lip synched properly. Select the phoneme you wish to edit, then resize it (Ctrl+Drag at the edge of a phoneme) or move it (Shift+Drag anywhere on the phoneme) to fit. If you feel a phoneme should be changed to better fit the lip synching, select it, right-click it, and select Edit (phoneme). Pick the phoneme you wish to use from the list; placing your cursor over the various buttons shows an example of that phoneme. (A quick note on phonemes: Faceposer doesn't get into very thorough detail when selecting phonemes, but a complete chart of the phonemes of human language is an informative read about how human speech is engineered, and will help you create more realistic lip synching.) Play the file to check whether it looks correct, and keep repeating this process as needed until the .WAV is lip-synched to your satisfaction.
* Step 4: When you are certain the file is perfect, hit Save Changes. This saves the data from the file yourwav_work.wav to yourwav.wav — and note that if you happen to have a crash or anything, you can always fall back to your work using the yourwav_work file.
* Step 5: Repeat with the rest of your sound files, but note that you don't need to do any extraction for non-vocal files — just save them and move on.

When you have converted all your sound files into lip-synched waves, it's time to move on to creating the choreographed scene.

Creating a Choreographed Scene

To start, click the Choreography tab in the bottom row of tabs. A window will pop up with an empty timeline structure on it. Next, go up to the menu bar, select Choreography, and hit New. This creates a new choreographed scene, which is basically how you combine all the features of Faceposer into one cohesive performance. After you name your scene, it will ask you to add the first actor by giving it a name. Name your actor as you please and it will generate a new timeline with your actor added to the scene.

To begin adding your wave files to the scene, you first need to create a channel. A channel can hold any kind of content, such as dialogue or expressions, and can have any name, though it is always smart to name channels something recognizable. Right-click in the area for your first actor and select to create a new channel. Name it something like "Dialogue"; this channel will hold all the wave files. Follow these steps to add your wave files to the channel:

* Step 1: Right-click in the new "Dialogue" channel and select WAV File...
* Step 2: Type the name of the .WAV file you wish to use in the "Sound:" text box. If the file is in a subdirectory of your \sound directory, give the path to the file, such as dialog/dialog.wav for the file \sound\dialog\dialog.wav.
* Step 3: Give your wave event a name.
* Step 4: Hit OK and your wave will appear in the channel on the timeline.
* Step 5: Drag it until it is in the spot you would like it, then repeat these steps until you have all your waves in the program and lined up properly on the timeline.
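Behind the GUI, Faceposer saves the scene as a plain-text .vcd file, which can be handy to know when debugging paths. A wave event created with the steps above ends up stored as a block roughly like the following — the actor, channel, and event names and the times here are illustrative, and the exact layout can vary between SDK versions:

```
actor "Actor1"
{
  channel "Dialogue"
  {
    event speak "MyFirstLine"
    {
      time 0.000000 2.500000
      param "dialog/dialog.wav"
    }
  }
}
```

If a wave refuses to play in-game, opening the .vcd in a text editor and checking the `param` path against your \sound directory is a quick sanity check.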
Once you have placed all your wave files, it is time to add expressive behaviours and other interesting tidbits to give your performance a better look.

Adding Expressions

Start in the Choreography window. Right-click and go to New... Channel. Name this channel "Expressions" or something similar, to remind you that it will contain the animation files used to give expression. Next, right-click in the channel and add a new Flex Animation. Give it a name and click OK. This creates an editable animation track for use with the facial musculature. Before you start animating, expand or collapse the animation to the length you want and place it where it should go. Then right-click on the animation and click Edit yourexpression in the expression tool. This opens the expression animation tool and lets you begin creating animations. If the expression tool does not pop up, click Flex Animation in the bottom row of tabs. Initially your animation will be blank. Follow these simple steps to create it:

Steps to Creating a Flex Animation

* Step 1: Right-click anywhere in the tool and click Expand..., then All Tracks.
* Step 2: Reset the flex sliders to default values. This is done by right-clicking anywhere in the Flex Animation tool and selecting Flex..., then Copy to Sliders. This copies all the default, unchecked values to the flex slider system, which is used to generate the different key frames in the animation.
* Step 3: Create the first key frame in the animation. In the bottom row of tabs, hit 3D View, then Flex Slider. Adjust the slider window so you can see the face of the 3D character you'll be modifying.
* Step 4: Keep this in mind: if a slider will change at any point during the animation, you need to check its tickbox, even if it does not change in the first key frame.
If you don't, the animation will not come out as intended. Now start moving the sliders back and forth until you get the pose you are looking for. Remember: check the tickbox of every slider you plan to modify during this animation.

* Step 5: Go back to the Flex Animation tool and pick the time you want the key frame to be. NOTE: A good way to line up animations with music and dialogue is to use the Choreography tool and drag the green time bar along until it hits exactly where you want it.
* Step 6: Right-click in the animation tool at the time you want the key frame and select Flex..., then Copy From Sliders. This takes the data from the flex sliders and applies it to the animation grid.
* Step 7: Repeat this process until you have all the key frames for your animation, and you are finished learning to create flex animations.

You will probably also want a way to force the character's eyes onto the camera. Do this by going to the Choreography tool and right-clicking on the flex animation channel. Select Look at Actor..., give it a name, and tell it to look at !player. You may want to give your scene even more animation; another option is full-body pre-canned animations, such as walking and idle animations. Add them by right-clicking in the flex animation channel and selecting Sequence.... Give the sequence a name, then pick the animation from the list. Preview your performance and make sure you're ready to record. (I advise you leave a small section at the front, with a strange face before the music starts, to let you know where to cue the music.)

Recording the Performance

Now get Fraps open and prepped for recording. Don't worry about capturing the choreography window: Fraps only records DirectX applications (i.e. the 3D view). Either way, the choreography window has to stay selected, or playback will stop at the end of a clip and not move on to the next.
When you are ready, hit Record, then hit Play in the choreography window. Let the animation finish, hit Stop, and then end the recording. You should now have an AVI from Fraps, ready for post-processing.

Performing the Performance In-Game

If you want your performance acted in-game, you have two options: 1) Integrate your performance into a map file and have a trigger fire it (see this tutorial; this way is best), or 2) create a live actor using npc_create npc_name (e.g. npc_create npc_gman). Then aim at the actor and type in the console: ent_setname npcname; ent_fire npcname setexpressionoverride scenes\subdir\name. Replace subdir\name with the location of your performance (e.g. scenes\myscenes\gmans_talk).

Post-Processing of the Video Feed

All I will explain about post-processing is how to line up the audio and the video. Put your video onto the video track of the final video, then put the un-split audio on the audio track. Remember the strange face I said to put in before the audio starts? Find right where that face ends and place your audio track there. Check that it lines up; if it doesn't, keep adjusting until it does. Once finished, render (create the final video) and then share it with the world.
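For reference, the console sequence for option 2 of the in-game playback section reads more clearly laid out one command per line; the NPC name and scene path are the same placeholder examples used in the text:

```
npc_create npc_gman                                        // spawn a live actor
ent_setname npcname                                        // aim at the NPC first, then name it
ent_fire npcname setexpressionoverride scenes\myscenes\gmans_talk
```

You can also save these lines in a .cfg file under your mod's cfg directory and run them with exec, which saves retyping while testing a scene.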

Tutorial by WikiPedia.









