The most difficult thing about using the Microsoft Azure Speech SDK was that, when recognizing speech from the microphone and playing back the translation in the target language, audio that arrived later was sometimes translated and played first, and at times two translated utterances played on top of each other.
To solve this problem, I had to capture exactly when playback started and finished, store the utterances one by one in a queue, and play the audio that arrived first before playing the audio that arrived later.
I suspect other developers have struggled with this problem as much as I did. Following the code example shared below should save you some time.
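Here is a minimal sketch of that queue-based approach in Python, assuming the `azure-cognitiveservices-speech` package and `simpleaudio` as a stand-in blocking playback library (any player that can block until a clip finishes works). The subscription key, region, languages, and voice name are placeholders, and the playback call assumes the SDK's default 16 kHz, 16-bit mono PCM synthesis output; adjust these for your own setup.

```python
import queue
import threading

import azure.cognitiveservices.speech as speechsdk
import simpleaudio  # stand-in playback library; any blocking player works

# Synthesized utterances wait here until the worker is ready to play them.
playback_queue = queue.Queue()
current_chunks = []


def playback_worker():
    """Play utterances strictly in arrival (FIFO) order, one at a time."""
    while True:
        audio = playback_queue.get()
        if audio is None:  # sentinel: recognition has stopped
            break
        # Assumes the default 16 kHz, 16-bit mono PCM synthesis format.
        play_obj = simpleaudio.play_buffer(audio, 1, 2, 16000)
        play_obj.wait_done()  # block until this clip ends, so clips never overlap


def on_synthesizing(evt):
    """Collect audio chunks and enqueue each finished utterance as a whole."""
    if evt.result.reason == speechsdk.ResultReason.SynthesizingAudio:
        current_chunks.append(evt.result.audio)
    elif evt.result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
        playback_queue.put(b"".join(current_chunks))
        current_chunks.clear()


def main():
    # "YourSubscriptionKey" and "YourRegion" are placeholders.
    config = speechsdk.translation.SpeechTranslationConfig(
        subscription="YourSubscriptionKey", region="YourRegion")
    config.speech_recognition_language = "ko-KR"  # example source language
    config.add_target_language("en")
    config.voice_name = "en-US-JennyNeural"       # example target voice

    # With no audio config given, the recognizer uses the default microphone.
    recognizer = speechsdk.translation.TranslationRecognizer(
        translation_config=config)
    recognizer.synthesizing.connect(on_synthesizing)

    worker = threading.Thread(target=playback_worker, daemon=True)
    worker.start()

    recognizer.start_continuous_recognition()
    input("Speak into the microphone; press Enter to stop.\n")
    recognizer.stop_continuous_recognition()

    playback_queue.put(None)  # let the worker drain the queue and exit
    worker.join()


if __name__ == "__main__":
    main()
```

The important detail is `wait_done()`: because the worker blocks until the current clip finishes before dequeuing the next one, utterances can never overlap, and the FIFO queue guarantees they play in the order they were recognized.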
If you run into any problems while working from my code, please don't hesitate to contact me. Any help or feedback would be much appreciated.
Top comments (1)
I saved a lot of time by looking at your code examples. Thank you very much.