From Baselines to Breakthroughs: My Final Thoughts on 3D LDS and FDS
Ethan W.
Hi everyone! Welcome to the final week of my blog!
Last week, I spent a lot of time testing the models and also began working on my final presentation. I delivered a practice presentation, received feedback, and updated my slides accordingly. I also ran into some interesting results while testing.
This week, I completed both my final presentation PowerPoint and my final paper. I also gave another practice presentation, where I received even more feedback on how to make it more understandable to a general audience. I will be giving my official final presentation tomorrow!
On the more technical side, I continued intensive testing to get better results for my final paper. I conducted many runs of all of the models I had planned before (Baseline, LDS, FDS, LDS+FDS) and was able to get significantly better results. Unlike last week, FDS improved MSE by about 25% in the underrepresented regions, and that number improved even further later on.

I also realized that I should include a second baseline for comparison: the baseline model with cost-sensitive learning. Logically, LDS works by improving cost-sensitive learning, so a fairer point of comparison is cost-sensitive learning without LDS. I added this as another test and got some surprising results: in pretty much every case, cost-sensitive learning performed worse than the plain baseline. I assume this is because of the continuous nature of the task, which is exactly why LDS is needed to make cost-sensitive learning effective, so this turned out to be a great example of why LDS helps in continuous tasks.
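To make the difference between plain cost-sensitive learning and LDS concrete, here is a minimal sketch of how LDS-style reweighting can work. This is not my actual training code; the bin count, kernel width, and function name are all illustrative assumptions. The idea is that plain cost-sensitive learning weights samples by the inverse of the raw label-bin counts, while LDS first smooths those counts with a Gaussian kernel before inverting them.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def lds_weights(labels, num_bins=50, sigma=2.0):
    """Minimal sketch of LDS-based cost-sensitive weights (illustrative only).

    1. Histogram the continuous labels into bins (the empirical label density).
    2. Smooth that histogram with a Gaussian kernel (the LDS step).
    3. Weight each sample by the inverse of its smoothed bin density,
       so underrepresented label regions get larger loss weights.
    """
    # Empirical label distribution
    hist, bin_edges = np.histogram(labels, bins=num_bins)

    # LDS: convolve the histogram with a Gaussian kernel
    smoothed = gaussian_filter1d(hist.astype(float), sigma=sigma)
    smoothed = np.clip(smoothed, 1e-6, None)  # avoid division by zero

    # Map each sample to its bin and take the inverse smoothed density
    bin_idx = np.clip(np.digitize(labels, bin_edges[1:-1]), 0, num_bins - 1)
    weights = 1.0 / smoothed[bin_idx]

    # Normalize so the weights average to 1 (keeps the loss scale comparable)
    return weights * len(weights) / weights.sum()

# Plain cost-sensitive learning would use 1 / hist[bin_idx] directly; with
# sparse, noisy bins in a continuous task those raw counts are unreliable,
# which lines up with why it underperformed without LDS in my tests.
```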
In the end, LDS and FDS significantly improved performance across the board. LDS saw a 33.2% improvement overall and, on average, a 37% improvement in the underrepresented scenarios. FDS saw a 41.7% improvement overall and, on average, a 43.6% improvement in the underrepresented scenarios. LDS+FDS saw a 34% improvement overall and, on average, a 37% improvement in the underrepresented scenarios. The fact that LDS+FDS consistently performed slightly better than LDS alone but slightly worse than FDS alone is very interesting. It suggests that the two approaches may not always complement each other, or that there was a problem with our implementation when they are combined. Still, both LDS and FDS showing significant improvement on their own is a considerable success.
Since this is my final blog, I would like to share some closing thoughts. This project has been super rewarding. From debugging the most annoying random problems here and there to finally seeing 3D LDS and FDS improve the models, it was a great hands-on experience. There is still a lot that could be done to extend this project in the future, but for now, I'm pretty happy with it.
Thank you all so much for reading!