Trial and Error – 3/21/25

Sophia L -

Hello everyone!! I hope you all had a restful spring break. I was able to work on my project a little bit in my downtime, so I can’t wait to share some updates with you all today.

In my last update, I was filtering through my data entries to prepare them for the AI model and explaining the background behind my code and network selection. Today I wanted to share more about how that process has been unfolding.

The hardest part about the code is the trial and error. The way my model is coded, once the base code is in place, I need to continuously adjust it so that the model can predict dropout rates as accurately as possible. This is how I initially devised my plan for splitting up the data that already exists. The testing portion of the process is split into three stages: feeding the model data, training the model, and then predicting.

For my model, I decided to split the data into two portions: 67% for feeding the model and 33% for tuning. To break this into simpler terms, I plan to feed the AI model 67% of the data so it can learn the patterns. Then it will predict the other 33%, which gives me a direct basis of comparison for measuring the model's accuracy. The model cannot see how the dropout rate actually changed during that 33% of the time period, but I can, so I use that portion of the data to tune the model. Once it is correctly tuned, I can be confident the model is ready to predict into the future. After it learns the patterns in the data, it will be able to identify the most significant predictors of dropout and forecast dropout rates 10 years into the future.
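For anyone curious what a 67/33 split like this looks like in practice, here is a minimal sketch using scikit-learn. The synthetic data, the five predictor columns, and the random-forest model are all illustrative assumptions on my part; the post does not say which model type or features are actually being used.

```python
# A sketch of a 67/33 train/evaluate split, assuming scikit-learn.
# The data and model choice here are placeholders, not the real project.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((300, 5))   # 300 records, 5 hypothetical predictors
y = rng.random(300)        # hypothetical dropout rates

# Hold back 33% of the data; the model never sees it during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42)

model = RandomForestRegressor(random_state=42)
model.fit(X_train, y_train)        # learn patterns from the 67%

# Predict the unseen 33% and compare against the true values.
predictions = model.predict(X_test)
print(len(X_train), len(X_test))   # 201 vs 99 records
```

The key idea is that the held-back 33% acts as "the future" from the model's point of view, so comparing its predictions there against the known values gives an honest accuracy estimate.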

Tuning is the most time-consuming part, and it is the part I am stuck on right now. I do not know how long it will take, but I do know it is very important to make the model as accurate as possible before I allow it to predict into the future!
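One way the tuning step can be made less manual (as Braydon's comment below suggests) is automated grid search with cross-validation. This sketch assumes scikit-learn; the parameter grid and the random-forest model are illustrative assumptions, not the project's actual settings.

```python
# Automated tuning sketch: GridSearchCV tries every combination of the
# parameters below and keeps the best one, scored by cross-validation.
# The grid and model are hypothetical examples.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.random((120, 5))   # hypothetical predictors
y = rng.random(120)        # hypothetical dropout rates

grid = {"n_estimators": [50, 100], "max_depth": [3, None]}
search = GridSearchCV(RandomForestRegressor(random_state=0), grid, cv=3)
search.fit(X, y)           # fits one model per combination per fold

print(search.best_params_)  # the winning settings
```

This replaces hand-adjusting one parameter at a time with a systematic sweep, at the cost of more compute.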

I can’t wait to share more in the upcoming weeks!

Talk to you soon,

Sophia Lin

 


Comments:


    ethan_f
    Hi Sophia! Considering your model's goal of predicting dropout rates a decade into the future, what methods are you exploring to account for unforeseen societal or educational shifts, ensuring the long-term relevance and accuracy of your predictions beyond the constraints of historical data?
    Braydon
    Hi Sophia, your approach to model tuning with a 67/33 split highlights the rigor needed for reliable AI predictions! Given the time-intensive nature of hyperparameter optimization, are you considering techniques like automated grid search or cross-validation to streamline the tuning process while maintaining accuracy?
    jana_e
    Hey Sophia! This tuning sounds super meticulous, but it'll totally be worth it in the end! Is there a specific reason you chose that 67/33 split?
