Code Finished.
Hey everyone, I've finalized the requirements for the audio code. Here’s how it works.
Step 1: The code asks you to say, “Hey, how’s it going?” or something similar to capture a baseline RMS (root mean square), which is essentially a numerical value representing how loudly the tester is speaking.
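Here’s a minimal sketch of what that calibration step might look like, assuming a Python setup with `sounddevice` and `numpy`. The function name, sample rate, and 3-second duration are my own illustrative choices, not the exact implementation:

```python
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16_000  # assumed sample rate in Hz

def calibrate_rms(seconds: float = 3.0) -> float:
    """Record a short phrase and return its RMS loudness."""
    print('Say something like "Hey, how\'s it going?"')
    # Record mono audio as float32 samples in [-1.0, 1.0]
    audio = sd.rec(int(seconds * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                   channels=1, dtype="float32")
    sd.wait()  # block until the recording finishes
    samples = audio.flatten()
    # RMS = square root of the mean of the squared samples
    return float(np.sqrt(np.mean(samples ** 2)))

baseline_rms = calibrate_rms()
print(f"Calibrated baseline RMS: {baseline_rms:.4f}")
```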
Step 2: Once that’s done, the code prompts you to speak, then listens to both what you say and how loudly you say it relative to the calibrated RMS value.
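Continuing the same sketch, the measurement step could compare the new recording’s RMS against the calibrated baseline. Expressing the result as a ratio is my own choice here, not necessarily what the actual code does:

```python
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16_000  # assumed sample rate in Hz

def measure_relative_volume(baseline_rms: float, seconds: float = 5.0) -> float:
    """Record the tester's speech and return its loudness relative to baseline."""
    print("Speak now...")
    audio = sd.rec(int(seconds * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                   channels=1, dtype="float32")
    sd.wait()
    rms = float(np.sqrt(np.mean(audio.flatten() ** 2)))
    # Express the volume relative to the calibration as a ratio;
    # > 1.0 means louder than the baseline, < 1.0 means quieter.
    return rms / baseline_rms

# baseline_rms comes from the calibration step above
relative_volume = measure_relative_volume(baseline_rms)
print(f"Relative volume: {relative_volume:.2f}x baseline")
```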
Step 3: Once all the values have been gathered, the code passes that info to the OpenAI API, which returns one of seven basic emotions along with an explanation of why it chose that one.
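For the final step, here’s a hedged sketch of the OpenAI call. The prompt wording, the `gpt-4o-mini` model choice, the specific list of seven emotions, and the `transcript` input (produced by whatever speech-to-text step captures the words) are all assumptions on my part:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical set of 7 basic emotions; the actual list may differ
EMOTIONS = ["happiness", "sadness", "anger", "fear",
            "surprise", "disgust", "neutral"]

def classify_emotion(transcript: str, relative_volume: float) -> str:
    """Ask the model for one of seven basic emotions plus its reasoning."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{
            "role": "user",
            "content": (
                f'The speaker said: "{transcript}". '
                f"Their volume was {relative_volume:.2f}x their calibrated baseline. "
                f"Classify the emotion as one of: {', '.join(EMOTIONS)}, "
                f"and explain why."
            ),
        }],
    )
    return response.choices[0].message.content

print(classify_emotion("Hey, how's it going?", 1.35))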
Currently, I am working on trials for the drawing and audio code to see how accurate the AI models are.