Is it possible to add an audio stream to an already existing audio and video stream that is being recorded by MediaRecorder?

  • Thread starter: Terraflow (Guest)

I am building an interview application where I am interviewed by an AI and have to answer its questions. The interview is recorded with MediaRecorder. The problem is that the audio generated by Whisper AI is not recorded. I tried combining the streams, but the recording stops as soon as the Whisper audio is added to the stream.

The flow is like this:

  1. The recording starts as soon as the interview starts
  2. Whisper audio is played with the first question
  3. I answer the question
  4. The second question plays, and so on.

I want all the audio recorded (from my mic and the Whisper audio).

Currently my working code looks like this (without combining the streams):

Code:
  let chunkIndex = 0
  const handleInterviewStart = async () => {
    setInterviewStarted(true)

    try {
      // Capture the webcam and microphone selected by the user
      const stream = await navigator.mediaDevices.getUserMedia({
        video: { deviceId: selectedVideoDeviceId ? { exact: selectedVideoDeviceId } : undefined },
        audio: { deviceId: selectedAudioDeviceId ? { exact: selectedAudioDeviceId } : undefined }
      });

      // Record the stream and upload a chunk every 5 seconds
      const mediaRecorder = new MediaRecorder(stream);
      mediaRecorder.ondataavailable = (event) => {
        uploadChunk(event.data, chunkIndex, false)
        chunkIndex += 1
      };
      mediaRecorder.start(5000);
      setInterviewRecorder(mediaRecorder);
    } catch (error) {
      console.error('Error starting interview recording:', error);
    }

    console.log('INIT MESSAGE: ', initMessage)
    await transcribeAndComplete(true)
  }
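
The direction I have been experimenting with is roughly the following (an untested sketch; `audioContext`, `mixDestination`, and `buildMixedStream` are placeholder names of mine, not existing code): instead of recording the getUserMedia stream directly, mix all audio into a single Web Audio destination that is created before MediaRecorder starts, so the recorded stream's tracks never change mid-recording.

Code:
  // Shared between recording and playback (assumed module-level state)
  let audioContext
  let mixDestination

  const buildMixedStream = (cameraStream) => {
    // Create the audio graph once, before recording starts
    audioContext = new AudioContext()
    mixDestination = audioContext.createMediaStreamDestination()

    // Route the microphone into the mix
    const micSource = audioContext.createMediaStreamSource(cameraStream)
    micSource.connect(mixDestination)

    // Record the camera's video track plus the single mixed audio track;
    // this track list stays fixed even when more sources are connected later
    return new MediaStream([
      ...cameraStream.getVideoTracks(),
      ...mixDestination.stream.getAudioTracks()
    ])
  }

The idea would then be to call `new MediaRecorder(buildMixedStream(stream))` in handleInterviewStart instead of `new MediaRecorder(stream)`.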

Code:
  const playSpeech = async (response) => {
    try {
      // Create a new audio element for the AI speech
      const audio = new Audio()

      // Set the audio source to an object URL built from the API response blob
      audio.src = URL.createObjectURL(await response.blob())

      audio.play()
    } catch (error) {
      console.error("Error playing speech:", error)
    }
  }
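
If the mixing approach above is viable, I assume playSpeech would also have to route the AI audio element through the same graph so the speech reaches both the recording and the speakers (again only a sketch, using the placeholder `audioContext` / `mixDestination` from above):

Code:
  const playSpeech = async (response) => {
    try {
      const audio = new Audio()
      audio.src = URL.createObjectURL(await response.blob())

      // Once the element is attached to the graph, its output only goes
      // where it is connected, so connect it to both the mix (recorded)
      // and the default destination (audible)
      const speechSource = audioContext.createMediaElementSource(audio)
      speechSource.connect(mixDestination)
      speechSource.connect(audioContext.destination)

      await audio.play()
    } catch (error) {
      console.error("Error playing speech:", error)
    }
  }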

Does anyone know how to achieve this?
