First, we need to understand some basic digital audio concepts. Feel free to skip this if you have it fresh. Whenever we are converting an audio signal from analogue to digital, all we are doing is checking where the waveform is at certain "points" in its oscillation. These "points" are usually called samples. In order to get a faithful signal, we need to sample our waveforms many times. The number of times we do this per second is what determines the sampling rate, and it is measured in Hertz.

Keep in mind that if our sampling rate is not fast enough, we won't be able to "capture" the higher frequencies, since these would fluctuate faster than we can measure. So how fast do we need to be for accurate results? The Nyquist-Shannon sampling theorem gives us a very good estimation: we need a sampling rate of about twice the highest frequency we want to capture. Since the highest frequency humans can hear is around 20 kHz, a sampling rate of 40 kHz should suffice.

Once we know this, let's see the most commonly used sampling rates:

- 44.1 kHz (CD audio)
- 48 kHz (standard for video and film audio)
- 88.2 and 96 kHz (high-resolution recording)
- 192 kHz (very high-resolution formats)

As you can see, most professional formats use a sampling rate higher than 40 kHz to guarantee that we capture the full frequency spectrum. Something that is important to remember, and that will become relevant later on, is that a piece of audio is always going to have the same length as long as it is played back at the same sample rate at which it was recorded.

For the sake of completeness, I just want to mention audio resolution (or bit depth) briefly. This is the other parameter that we need to take into consideration when converting to digital audio. It measures how many bits we use to encode the information of each of our samples. Higher values will give us more dynamic range, since a bigger range of intensity values will be captured. This doesn't really affect the pull up/down process, though.

There's a lot to be said on the subject of frame rate, but I will keep it short. This value is simply how many pictures per second are put together to create our film or video. 24 frames per second (or just fps) is the standard for cinema, while TV uses 25 fps in Europe (PAL) and 29.97 fps in the US (NTSC).

Keep in mind that these frame rates differ not only on a technical level but also on a stylistic level. 24 fps "feels" cinematic and "premium", while the higher frame rates used in TV can sometimes feel "cheap". This is probably a cultural perception, and it is definitely changing; videogames, which often use high frame rates like 60 fps and beyond, are partially responsible for this shift in taste. The amount of motion is also very important: higher frame rates are best at showing fast motion.

But how can these different frame rates affect audio sync? The problem usually starts when a project is filmed at a certain rate and then converted to a different one for distribution. This would happen if, for example, a movie (24 fps) is brought to European TV (25 fps), or an American TV programme (29.97 fps) is brought to India, which uses PAL (25 fps). Let's see how this kind of conversion is done.

Some people think that audio can be set to be recorded at a certain frame rate the same way it can be set to be recorded at a certain sampling frequency. Audio, however, doesn't intrinsically have a frame rate value the way it has a bit depth and a sampling rate. If I give you an audio file and nothing else, you could easily figure out the bit depth and sampling rate, but you would have no idea about the frame rate used on the associated video. Now, and here comes the nuanced but important point: any audio recorded at the same time as video will sync with the specific frame rate used when recording that video. They will sync because they were recorded together.
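The Nyquist-Shannon limit mentioned above can be demonstrated numerically. Here is a minimal Python sketch (the 48 kHz rate and the 30 kHz test tone are arbitrary choices for illustration): a tone above half the sampling rate produces exactly the same samples, up to sign, as a lower-frequency "alias".

```python
import math

SR = 48_000  # sampling rate in Hz, chosen for illustration

def sample_sine(freq_hz, n_samples, rate=SR):
    """Return `n_samples` samples of a sine wave taken at `rate` Hz."""
    return [math.sin(2 * math.pi * freq_hz * n / rate) for n in range(n_samples)]

# A 30 kHz tone is above the Nyquist limit (48 kHz / 2 = 24 kHz), so its
# samples are identical (up to sign) to those of an 18 kHz tone: 48 - 30 = 18.
above_nyquist = sample_sine(30_000, 16)
alias = sample_sine(18_000, 16)

print(all(abs(a + b) < 1e-9 for a, b in zip(above_nyquist, alias)))  # True
```

Once the two tones are indistinguishable in the sampled data, no later processing can tell them apart, which is why the converter has to run fast enough in the first place.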
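The point that a clip only keeps its length at its original sample rate can be checked with simple arithmetic; the sample count here is made up for illustration:

```python
# 480,000 samples recorded at 48 kHz last exactly 10 seconds.
# Misinterpreting the same samples as 44.1 kHz audio stretches the clip
# (and lowers its pitch), because each sample now occupies more time.
n_samples = 480_000

duration_correct = n_samples / 48_000   # seconds at the correct rate
duration_wrong = n_samples / 44_100     # seconds if misread as 44.1 kHz

print(duration_correct)          # 10.0
print(round(duration_wrong, 2))  # 10.88
```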
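The film-to-PAL case is the classic example of a frame-rate change dragging audio along with it. A rough sketch of the numbers (the 90-minute running time is hypothetical):

```python
import math

# Playing 24 fps film at PAL's 25 fps (the "PAL speed-up") shortens the
# programme and, if the audio is simply sped up to stay in sync,
# raises its pitch unless it is corrected afterwards.
FILM_FPS = 24.0
PAL_FPS = 25.0

speed_factor = PAL_FPS / FILM_FPS           # about 4.2 % faster
new_minutes = 90.0 / speed_factor           # a 90-minute film now runs shorter
pitch_shift = 12 * math.log2(speed_factor)  # shift in semitones, uncorrected

print(round(speed_factor, 4))  # 1.0417
print(round(new_minutes, 1))   # 86.4
print(round(pitch_shift, 2))   # 0.71
```

A shift of roughly two-thirds of a semitone is clearly audible on sustained material, which is why distributors either pitch-correct the sped-up audio or accept the change.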