Bugging Out

Elena C -

Hey, and welcome back again! I went to run the code on the transcripts; however, while running it I found a major bug in the system. A bug is a part of the code that has been written or designed incorrectly. After submitting a specific request – for example: “response = self._request(url, host=host, type_=type_, data=data)” – an error kept popping up. That meant I had to debug the code. Debugging is when a developer isolates separate parts of the code and works around or fixes the issues each part presents. I found that the code only handled single requests, which was all I had originally tested it on; when met with a large file, such as the interview transcripts, it had no way to loop over the contents. I thought I would have to rewrite that whole section, but after watching many videos on working with Python, I realized I could simply wrap it in a loop at the end – which in Python is just “for line in transcript: ...”, not the C-style “for (initialization; condition; increment) { // code }” pattern I kept seeing in tutorials.
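For anyone who wants to see the shape of that fix, here is a minimal sketch. The class name, the file handling, and the stubbed-out request body are placeholders for illustration, not my exact code:

class SentimentClient:
    def _request(self, url, host=None, type_=None, data=None):
        # Stub: the real version sends one line of text to the
        # sentiment service and returns its response.
        return {"data": data, "sentiment": "pending"}

    def analyze_transcript(self, path, url, host, type_):
        # The looping fix: instead of issuing a single request,
        # iterate over every line of the transcript file.
        responses = []
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line:  # skip blank lines between speakers
                    continue
                responses.append(
                    self._request(url, host=host, type_=type_, data=line)
                )
        return responses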

 

So now everything is fixed and the code can run, right? Well, not exactly. When I tried to run it again, the code was still having trouble applying a sentiment score to every line of the transcript, as well as distinguishing between the different types of sentiment. For this problem I really did have to go through and rewrite many lines of code, because there were bugs throughout. After about a week of rewriting, though, I finally got the whole process debugged.
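To give a feel for the per-line labeling step, here is a toy version. The cue-word scorer below is a stand-in for the real model; only the overall structure (score each line, then bucket the score into a sentiment type) reflects what my code does:

import re

def score_line(line):
    # Stand-in scorer: counts a few positive/negative cue words
    # and squashes the result into [-1.0, 1.0].
    positive = {"amazing", "great", "love", "happy"}
    negative = {"terrible", "awful", "hate", "sad"}
    words = re.findall(r"[a-z']+", line.lower())
    raw = sum(w in positive for w in words) - sum(w in negative for w in words)
    return max(-1.0, min(1.0, raw / 3))

def label_transcript(lines):
    # Apply a sentiment label to every line of the transcript.
    labeled = []
    for line in lines:
        score = score_line(line)
        if score > 0.1:
            label = "positive"
        elif score < -0.1:
            label = "negative"
        else:
            label = "neutral"
        labeled.append((line, label, score))
    return labeled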

 

Now it seems to be working fine, and it should finish processing the transcripts by the end of today. More interviews are lined up for later today and the rest of this week, so we will have even more data to work with. Join me on the next blog post to see the results!

Comments:

    sai_g
    The debugging process sounds really rough! Did you have any strategies or tools that helped you identify and fix these bugs in your sentiment analysis? Also, you mentioned that your code would be capable of differentiating between sentiments, so how are you planning to check the validity of these sentiments?
    elena_c
    Yeah man, it has been tiring. But yes! I did have a few strategies that helped me out. One method was to reduce model complexity: code a more basic model first, then move on to the deep learning side of sentiment. For example, the phrase "This is amazing!" should be inherently positive with no sarcastic context, and if the basic model could not detect that, then clearly something was wrong at the fundamental level rather than at the more complex level. And that's a great question; in my next blog post I was actually going to go into more depth on checking the validity of the sentiments, but as a brief overview, I essentially check each transcript's performance through a validity test. I split the data into three sections: a training set (used to train the basic model), a validation set (which I evaluate carefully myself), and a test set (which checks validity through ML models). On the ML side I use a logistic regression model and look at which words contribute most to the sentiment classification. That way, if certain words dominate, I can check that those words are falling under the correct sentiment.
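    Here's a rough sketch of what that word-contribution check looks like, using scikit-learn's CountVectorizer and LogisticRegression; the four-example dataset is made up just to show the idea:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Tiny made-up dataset: 1 = positive, 0 = negative.
    texts = ["This is amazing!", "I love this part",
             "This is terrible", "I hate waiting"]
    labels = [1, 1, 0, 0]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(texts)
    model = LogisticRegression().fit(X, labels)

    # Pair each word with its learned coefficient: large positive values
    # push a line toward "positive", large negative toward "negative".
    words = vectorizer.get_feature_names_out()
    for word, coef in sorted(zip(words, model.coef_[0]), key=lambda p: p[1]):
        print(f"{word:12s} {coef:+.3f}")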
