Week 9: Multiclass LR (intuitive? version)
Johnny Y -
This week, I continued working on the multiclass logistic regression (LR) algorithm for the SentBERT + ARIMA data. It has come to my attention that a more intuitive explanation of LR might be best (thanks Makeen). Here goes:
The AI is confronted with the task of making a yes/no prediction based on several bits of information. It doesn't know how much weight to give each bit of information yet, so it weighs them all equally to start. It then combines the weighted information into a score and squashes that score into a probability between 0 and 1 (think of it as how confident the AI is that the answer is yes). If the probability is above 0.5 (more likely than not), the AI predicts yes; otherwise, it predicts no. For each data sample, the AI makes a prediction and then nudges its weights based on how wrong it was. If the prediction was correct and confident, the weights barely change. If it was wrong, the AI adjusts them by an amount proportional to its confidence: if it thought the probability of yes was really high or really low and turned out to be mistaken, it adjusts the weights more.
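To make that concrete, here's a minimal sketch of the binary case in plain NumPy. The function names and the synthetic data are mine, not from the actual SentBERT + ARIMA pipeline; this is just the textbook gradient-descent update, not our production code:

```python
import numpy as np

def sigmoid(z):
    # Squash a raw score into a probability between 0 and 1.
    return 1.0 / (1.0 + np.exp(-z))

def train_binary_lr(X, y, lr=0.1, epochs=200):
    """Binary logistic regression via plain gradient descent.

    X: (n_samples, n_features) inputs (the "bits of information").
    y: (n_samples,) labels, 0 = no, 1 = yes.
    """
    w = np.zeros(X.shape[1])  # equal (zero) weight on everything to start
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)            # confidence that the answer is yes
        error = p - y                     # tiny when confident and correct,
                                          # large when confident and wrong
        w -= lr * (X.T @ error) / len(y)
        b -= lr * error.mean()
    return w, b

# Quick smoke test on made-up data:
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
w, b = train_binary_lr(X, y)
preds = (sigmoid(X @ w + b) > 0.5)        # predict yes when probability > 0.5
print("training accuracy:", (preds == y.astype(bool)).mean())
```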
For multiclass LR, there's no single threshold that works (you can't carve up/no-change/down out of one score with fixed cutoffs), so the AI instead computes a separate score for each choice. In this case, the AI would compute a score for up, a score for no-change, and a score for down. The AI then uses a formula (the softmax) to convert the scores into probabilities that sum to 1, and picks the choice with the highest probability. Training works the same way as before: if the prediction was correct and confident, the weights barely change. If it was wrong, the AI adjusts them by an amount proportional to its confidence.
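And a matching sketch for our three-class case, again with hypothetical names and synthetic labels (0 = down, 1 = no-change, 2 = up); the score-to-probability formula in the middle is the softmax mentioned above:

```python
import numpy as np

def softmax(scores):
    # Convert each row of scores into probabilities that sum to 1.
    z = scores - scores.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_multiclass_lr(X, y, n_classes, lr=0.1, epochs=200):
    """Multiclass LR (softmax regression) via plain gradient descent.

    y: (n_samples,) integer labels, e.g. 0 = down, 1 = no-change, 2 = up.
    """
    n, d = X.shape
    W = np.zeros((d, n_classes))   # one weight column per choice
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]       # one-hot targets
    for _ in range(epochs):
        P = softmax(X @ W + b)     # one probability per choice, summing to 1
        error = P - Y              # again: big update when confidently wrong
        W -= lr * (X.T @ error) / n
        b -= lr * error.mean(axis=0)
    return W, b

# Pick the choice with the highest probability:
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = np.argmax(X[:, :3] + 0.1 * rng.normal(size=(300, 3)), axis=1)
W, b = train_multiclass_lr(X, y, n_classes=3)
preds = np.argmax(softmax(X @ W + b), axis=1)
print("training accuracy:", (preds == y).mean())
```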
Hope this helps!