Python | Speech recognition on large audio files



Speech recognition is the process of converting audio into text. It is commonly used in voice assistants like Alexa and Siri. The SpeechRecognition library for Python lets us convert audio into text for further processing. In this article, we will look at converting large or long audio files into text using the SpeechRecognition library in Python.
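
As a quick illustration, the snippet below is a minimal sketch of transcribing a short audio file with the SpeechRecognition library; the filename short_audio.wav is only a placeholder.

import speech_recognition as sr

recognizer = sr.Recognizer()

# Read the whole file into an AudioData object.
with sr.AudioFile("short_audio.wav") as source:
    audio_data = recognizer.record(source)

# Send the audio to Google's free web speech API and print the transcript.
try:
    print(recognizer.recognize_google(audio_data))
except sr.UnknownValueError:
    print("Could not understand the audio")
except sr.RequestError as e:
    print("API request failed:", e)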

Processing large audio files

When the input is a long audio file, the accuracy of speech recognition decreases: the Google speech recognition API cannot recognize long audio files with good accuracy. Therefore, we need to split the audio file into smaller chunks and feed those chunks to the API one by one. Doing this improves accuracy and allows us to recognize large audio files.

Splitting the audio based on silence

One way to process the audio file is to split it into chunks of constant size. For example, a 10-minute audio file can be split into 60 chunks of 10 seconds each. We can then feed each chunk to the API and concatenate the resulting transcripts. This method is inaccurate: splitting the file at fixed intervals can cut a sentence, or even a word, in half, because a chunk may end before a word is completely spoken, and Google cannot recognize incomplete words, so we might lose important words in the process.
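
Here is a minimal sketch of this naive fixed-size approach using pydub (installed below under "Libraries required"); the filename large_audio.wav is a placeholder. pydub slices AudioSegment objects by milliseconds:

from pydub import AudioSegment

audio = AudioSegment.from_wav("large_audio.wav")

chunk_ms = 10 * 1000  # 10-second chunks
for i, start in enumerate(range(0, len(audio), chunk_ms)):
    # A word that straddles a chunk boundary is cut in half here,
    # which is exactly the weakness described above.
    chunk = audio[start:start + chunk_ms]
    chunk.export(f"chunk{i}.wav", format="wav")
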
The other way is to split the audio file based on silence. Humans pause briefly between sentences; if we split the audio file into chunks at these pauses, we can process it sentence by sentence and concatenate the transcripts. This approach is more accurate than the previous one because we do not cut sentences in half: each chunk contains a complete sentence without interruptions, and we no longer need chunks of a fixed length.

The disadvantage of this method is that it is difficult to choose the length of silence at which to split, because people speak differently: one speaker may pause for a full second between sentences while another pauses for only 0.5 seconds.
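
pydub provides a split_on_silence helper that implements this idea. The sketch below is illustrative: the threshold values are assumptions that usually need tuning per recording, for exactly the reason given above.

from pydub import AudioSegment
from pydub.silence import split_on_silence

audio = AudioSegment.from_wav("large_audio.wav")

chunks = split_on_silence(
    audio,
    min_silence_len=500,             # treat pauses of at least 0.5 s as breaks
    silence_thresh=audio.dBFS - 16,  # 16 dB below average loudness counts as silence
    keep_silence=250,                # keep some padding so words are not clipped
)

for i, chunk in enumerate(chunks):
    chunk.export(f"sentence{i}.wav", format="wav")
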
Libraries required

Pydub: sudo pip3 install pydub
SpeechRecognition: sudo pip3 install SpeechRecognition
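
Putting the pieces together, here is one possible end-to-end sketch, not the article's exact code: it splits large_audio.wav on silence, recognizes each chunk with Google's API, and joins the partial transcripts. The filename and threshold values are assumptions.

import os

import speech_recognition as sr
from pydub import AudioSegment
from pydub.silence import split_on_silence

def transcribe_large_audio(path):
    audio = AudioSegment.from_wav(path)

    # Split on the pauses between sentences (thresholds are illustrative).
    chunks = split_on_silence(
        audio,
        min_silence_len=500,
        silence_thresh=audio.dBFS - 16,
        keep_silence=250,
    )

    recognizer = sr.Recognizer()
    pieces = []
    for i, chunk in enumerate(chunks):
        chunk_path = f"chunk{i}.wav"
        chunk.export(chunk_path, format="wav")
        with sr.AudioFile(chunk_path) as source:
            audio_data = recognizer.record(source)
        try:
            pieces.append(recognizer.recognize_google(audio_data))
        except sr.UnknownValueError:
            pass  # skip chunks the API could not understand
        os.remove(chunk_path)  # delete the temporary chunk file
    return " ".join(pieces)

print(transcribe_large_audio("large_audio.wav"))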