I’m creating a text-to-speech service with the IBM Watson API. With the following code segment I was able to save the converted .wav file to my server.
```js
const fs = require('fs')
// textToSpeech is an already configured TextToSpeechV1 client

textToSpeech
  .synthesize(synthesizeParams)
  .then((response) => {
    return textToSpeech.repairWavHeaderStream(response.result)
  })
  .then((buffer) => {
    fs.writeFileSync('hello_world.wav', buffer)
  })
  .catch((err) => {
    console.log('error:', err)
  })
```
But I do not want to store an audio file on the server every time text is converted. How can I send the buffer directly to the user as a download?
Answer
One approach is to set up an endpoint that accepts the text to be converted to speech, calls the IBM Watson API, and, inside the `then` handler, sends the resulting buffer directly to the user on the `res` object (I see you tagged express-js).
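Here is a minimal sketch of that idea, assuming an Express app and the already configured `textToSpeech` client from your question. The route path `/api/tts`, the request body shape `{ text: "..." }`, and the voice name are placeholders you would adjust:

```js
const express = require('express')
const app = express()
app.use(express.json())

app.post('/api/tts', (req, res) => {
  const synthesizeParams = {
    text: req.body.text,
    accept: 'audio/wav',
    voice: 'en-US_AllisonV3Voice', // example voice, change as needed
  }

  textToSpeech
    .synthesize(synthesizeParams)
    .then((response) => textToSpeech.repairWavHeaderStream(response.result))
    .then((buffer) => {
      // Send the repaired WAV buffer straight to the client as a download
      res.set({
        'Content-Type': 'audio/wav',
        'Content-Disposition': 'attachment; filename="speech.wav"',
      })
      res.send(buffer)
    })
    .catch((err) => {
      console.log('error:', err)
      res.status(500).json({ error: 'Text-to-speech conversion failed' })
    })
})

app.listen(3000)
```

Nothing is written to disk here; the buffer returned by `repairWavHeaderStream` is sent directly on the response.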
Take a look at these two articles and this YouTube video (on streaming video) for suggestions on how to approach it.
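If you don't need the WAV header repair (for example, if you request `audio/mp3` or another format instead of WAV), you could also skip buffering entirely and pipe Watson's result stream straight to the response, in the spirit of the streaming approach the video covers. A hedged sketch, with the same placeholder route and request shape:

```js
app.post('/api/tts-stream', (req, res) => {
  textToSpeech
    .synthesize({ text: req.body.text, accept: 'audio/mp3' })
    .then((response) => {
      // response.result is a readable stream; pipe it directly to the client
      res.set('Content-Type', 'audio/mp3')
      response.result.pipe(res)
    })
    .catch((err) => {
      console.log('error:', err)
      res.status(500).end()
    })
})
```

Note that `repairWavHeaderStream` needs the whole audio in memory, which is why the buffered version above is the safer choice when you specifically need a well-formed .wav file.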