Lipsync

From Synfig Studio :: Documentation
<!-- Page info end -->

== Method1 (recent) ==

It is '''important''' to stress from the start that the explanation of method1 below was written by watching the videos and then adding comments, so it is '''highly recommended''' to begin by <u>watching video1 and then video2.</u>

Start ''Papagayo'' and open your <u>audio-file.wav</u>, then type the text corresponding to the audio and choose its language. Click {{Shortcut|breakdown}}: the text will appear on the audio waveform. Drag the <code style="color: #009300;">green</code> bar so that it covers the whole sentence, then drag the <code style="color: #d17812;">orange</code> bars corresponding to the individual words. There is no need to be very precise; keep it simple, as long as it roughly matches the audio (it's up to you). Then click the triangle-shaped {{Shortcut|play}} button, and when the result is good, save the work as <u>file.pgo</u>. The ''Papagayo'' part is now over.

Then open ''Synfig Studio''. It is much easier to ''<u>watch</u>'' the video to understand, but let's detail it together. Open your <u>file.sifz</u> containing your layers and groups, expand all the child layers corresponding to the mouth of your character, and check the boxes for all the mouth positions.
Now import your <u>file.pgo</u> and move it below the layer corresponding to the last mouth, then expand all the child layers of your <u>file.pgo</u> and tick all of them.

Now you have to move your drawings into the layers of your <u>file.pgo</u>: if a drawing corresponds to the mouth <code>AI</code>, move it into the <u>file.pgo</u> layer for <code>AI</code> (again, it is easier to <u>''watch the video to understand''</u>).
Do this for all the mouth layers; after that you can press {{Shortcut|play}} to see the mouth come alive.

We will add a '''stroboscope''' effect (found in the <u>time</u> category) to slow the mouth down: without it the mouth shapes go by too fast, they run into one another and we never see a break. A simple comparison: <code>''if a person speaks without pausing you won't understand the sentence, because the words run together''.</code>

If we now add our '''stroboscope''', there will be a break, a tiny silence between words, which delineates them and makes the sentence understandable.

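
The effect can be pictured as resampling time at a fixed frequency, so every instant inside a tick is shown with the frame captured at the start of that tick. A rough Python sketch of the idea (the function name and exact behaviour are our illustration, not Synfig's internals; '''12''' mirrors the value used in the video):

```python
import math

def strobe(t, freq=12.0):
    """Quantize time t (in seconds) to the last stroboscope tick.

    With freq ticks per second, every time inside a tick interval is
    shown with the frame from the start of that interval, which is
    what produces the small holds between mouth shapes.
    """
    return math.floor(t * freq) / freq

# All times inside one 1/12 s interval collapse to the same frame:
print(strobe(0.30))  # -> 0.25
print(strobe(0.33))  # -> 0.25 (still the same held frame)
print(strobe(0.34))  # next tick
```

A higher frequency gives shorter holds (faster mouths); a lower one gives longer holds.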
The <u>stroboscope</u> must be placed at the top of the <u>file.pgo</u> group. In the video example we set it to '''12''', which gives a good result; feel free to use more or less.

The explanation is over: you now have an animation with the mouth moving according to the voice.

Now comes a rather '''important''' paragraph about the "mouth positions". In ''Papagayo'' there are 7 different styles (''manga'', ''comic'', ''3d'', etc.), but the style is not very interesting for us.
Let's take a closer look at the <u>positions of the mouths</u>: there are '''10 different positions''' in ''Papagayo'', named
<code>AI</code> <code>E</code> <code>etc</code> <code>FV</code> <code>L</code> <code>MBP</code> <code>O</code> <code>rest</code> <code>U</code> <code>WQ</code>.
You can see the mouth positions in the video, or by looking in the temporary folder of ''Papagayo''; under Linux the folder is apparently located at <code>tmp/.mount_IENJFN/usr/opt/papagayong/rsrc/mouths/</code> (this path will likely differ on Windows, or depending on the version of ''Papagayo'' used).

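
For reference, the 10 position names can be kept in a small lookup table, e.g. to sanity-check a breakdown before animating (the helper below is our own illustration, not part of ''Papagayo''):

```python
# The 10 Papagayo mouth positions listed above.
MOUTHS = {"AI", "E", "etc", "FV", "L", "MBP", "O", "rest", "U", "WQ"}

def unknown_phonemes(phonemes):
    """Return breakdown entries that don't match a known mouth position."""
    return [p for p in phonemes if p not in MOUTHS]

print(unknown_phonemes(["rest", "AI", "MBP", "XY"]))  # -> ['XY']
```

Any name reported here would have no matching drawing in the <u>file.pgo</u> group.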
In the video you will see the mouth images for the <u>10 positions and the 7 different styles.</u>

This text explanation is based on video2; <u>video1</u> is quicker to grasp, so we advise you to watch <u>video1</u> first, then video2.

== Links ==

Video link1: [https://youtu.be/-Y0Ox0cnlL4 Synfig + Papagayo: Lipsync Tutorial]

Video link2: [https://youtu.be/M1jl9F6k0BY Lipsync Tutorial - Synfig - Papagayo]

Download Papagayo:
* Windows and Linux: [http://morevnaproject.org/papagayo/ http://morevnaproject.org/papagayo/]
* OSX: [http://my.smithmicro.com/papagayo.html http://my.smithmicro.com/papagayo.html]

''Latest revision as of 18:47, 14 January 2020''

== Method2 (former) ==


This is a small tutorial on how I do lipsync. We will need an audio recorder (e.g. ''Audacity''), the program ''Papagayo'', a video editor (like ''Avidemux''), a text editor (like gedit) and, of course, ''Synfig''.

I suggest you look at Preston Blair's drawings: Link1, Link2, Link3 (containing a 35-page PDF of drawings).


We have made our drawing (in this case in front view).

1. Record the text you want to use with ''Audacity''.

2. Then fix it (if necessary) and export it in WAV format. We'll name it "Texto.wav".

3. Open the file Texto.wav with ''Papagayo''.

4. Fix the text according to the ''Papagayo'' instructions and save it as "Texto.pgo".

5. Once settled, choose "Export Voice..." and save it as "Texto.dat".

6. Open "Texto.dat" with gedit and note the frame position of each phoneme: at 24 fps, frame 1 corresponds to 1f, frame 24 to 1s, frame 25 to 1s 1f, frame 50 to 2s 2f, etc.
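
The exported "Texto.dat" is plain text: a header line ("MohoSwitch1" in the exports we have seen), then one <code>frame phoneme</code> pair per line. Instead of translating the frame numbers by eye, they can be converted to the s/f notation programmatically; a hedged Python sketch assuming a 24 fps project (the function names are ours):

```python
FPS = 24  # assumed project frame rate; frame 24 -> "1s", frame 25 -> "1s 1f"

def frame_to_time(frame, fps=FPS):
    """Convert an absolute frame number to the s/f notation used above."""
    s, f = divmod(frame, fps)
    if s and f:
        return f"{s}s {f}f"
    return f"{s}s" if s else f"{f}f"

def parse_dat(text):
    """Parse a Papagayo .dat export: header line, then 'frame phoneme' lines."""
    lines = text.strip().splitlines()[1:]  # skip the 'MohoSwitch1' header
    return [(int(fr), ph) for fr, ph in (line.split() for line in lines)]

sample = """MohoSwitch1
1 rest
24 AI
25 E
50 MBP
"""
for frame, phoneme in parse_dat(sample):
    print(frame_to_time(frame), phoneme)
# -> 1f rest / 1s AI / 1s 1f E / 2s 2f MBP
```

If your project uses a different frame rate, change <code>FPS</code> accordingly.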


Now we know the exact position of each phoneme. We still have to decide how many different poses we want (we can use the set given by Preston Blair, or draw our own).


Now to our animation: let's make the mouth move, without the head.

1. Export the mouth canvas (if we follow the Preston Blair drawings, export the head).

2. Change the default interpolation to Constant.

3. Now, on the exported canvas, draw each pose at the appropriate frame, duplicating the keyframe when necessary. If the speech is very fast, it is not necessary to draw every phoneme.
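
Constant interpolation means each drawn pose simply holds until the next waypoint. Given the (frame, phoneme) pairs noted from "Texto.dat", the pose active at any frame is the last waypoint at or before it; a small Python sketch of that lookup (the names are ours):

```python
import bisect

def active_phoneme(breakdown, frame):
    """With constant interpolation, the pose at `frame` is the one set
    at the last waypoint at or before it.  `breakdown` is a sorted
    list of (frame, phoneme) pairs, e.g. taken from Texto.dat."""
    frames = [f for f, _ in breakdown]
    i = bisect.bisect_right(frames, frame) - 1
    return breakdown[i][1] if i >= 0 else "rest"

poses = [(1, "rest"), (24, "AI"), (25, "E"), (50, "MBP")]
print(active_phoneme(poses, 30))  # -> 'E' (held since frame 25)
```

This is exactly why a pose skipped in step 3 is harmless: the previous pose just holds a little longer.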


Alternatively, we can draw all the phonemes we need in the first second and start the animation from frame 1s, duplicating the corresponding keyframe whenever it is needed. In this case we must start rendering from 1s.

Another thing we can do is create a library of phonemes, taking advantage of the Group layer as in [http://vimeo.com/10318012 this tutorial].


Then we'll use the video editor (''Avidemux'') to add the audio to the video.

I hope it can help you.

