Ethan W's Senior Project Blog
Project Title: Unlocking 3D AI: Teaching AI to Better Understand 3D Spaces with Limited Data
BASIS Advisor: Mrs. Shelby Kilmer-Webb
Internship Location: ASU Brickyard Engineering
Onsite Mentor: Dr. Yezgiw Yang
Project Abstract
Autonomous vehicles that rely only on camera data face challenges in estimating their 3D spatial position because they lack the precise measurements provided by radar and LiDAR sensors. Instead, they use machine learning techniques to infer this information from 2D data, which introduces potential errors due to data imbalance. Data imbalance occurs when certain types of data are overrepresented while others are underrepresented. It can cause machine learning models to perform well in familiar conditions, such as clear weather and well-lit roads, but struggle in less common, high-risk scenarios like fog and complex environments. The problem is particularly challenging in 3D applications, which often involve at least six degrees of freedom, compared to text and images. Deep imbalanced regression is a specialized machine learning approach that addresses data imbalance for continuous labels, typically by smoothing the label distribution to help stabilize learning. The goal of this project is to apply deep imbalanced regression methods to the imbalanced data used to train a machine learning model that predicts the 3D position of autonomous vehicles. This could improve the accuracy and reliability of 3D spatial understanding, making autonomous navigation more effective and safer in the real world. Various models will be tested and compared to determine which techniques most effectively improve performance on underrepresented scenes.
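To make the smoothing idea concrete, here is a minimal one-dimensional sketch of label distribution smoothing (LDS) in Python. The bin count, kernel width, and age example are illustrative choices of mine rather than values from this project.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def lds_weights(labels, num_bins=100, sigma=2.0):
    """Toy 1-D label distribution smoothing (LDS): histogram the
    continuous labels, convolve the bin counts with a Gaussian kernel,
    and weight each sample by the inverse of its smoothed bin density."""
    counts, edges = np.histogram(labels, bins=num_bins)
    smoothed = gaussian_filter1d(counts.astype(float), sigma=sigma)
    # Map each label back to its bin (clip keeps the max label in range).
    bin_idx = np.clip(np.digitize(labels, edges[1:-1]), 0, num_bins - 1)
    weights = 1.0 / np.maximum(smoothed[bin_idx], 1e-8)
    return weights / weights.mean()  # normalize so the mean weight is 1

# Example: common ages near 30 get down-weighted relative to rare ages.
ages = np.concatenate([np.random.normal(30, 5, 1000),
                       np.random.uniform(0, 100, 50)])
sample_weights = lds_weights(ages)
```

Rare labels then contribute more to the training loss via these per-sample weights, which is one common way smoothing is put to work in imbalanced regression.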
From Baselines to Breakthroughs: My Final Thoughts on 3D LDS and FDS
Hi everyone! Welcome to the final week of my blog! Last week, I did a lot of model testing and also began working on my final presentation. I delivered a practice presentation, received feedback, and updated my slides. I also ran into some interesting results while testing... Read More
Finally Seeing Results
Hi everyone! Welcome back to week 9 of my blog. Last week, I began working on my final presentation and had one practice presentation. Despite some initial hiccups, I received some valuable feedback. I also resolved file access issues and began running real tests, which paved the way for this week’s progress. This... Read More
Smoothing Techniques, Presentation Prep, and a Philadelphia Detour
Hi everyone! Welcome back to week 8 of my blog. Last week was all about adapting Label Distribution Smoothing and Feature Distribution Smoothing to work in three dimensions. I coded up an implementation that used a Gaussian kernel to smooth a label distribution discretized into a grid of bins... Read More
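For anyone curious what that looks like in code, here is my after-the-fact sketch of the general approach: bin the (x, y, z) labels into a voxel grid, smooth the counts with a 3D Gaussian, and reweight samples by inverse smoothed density. The grid resolution and kernel width are placeholders, and this is not the project's actual implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lds_weights_3d(positions, bins=(32, 32, 32), sigma=1.0):
    """Sketch of 3-D LDS: bin (x, y, z) position labels into a voxel
    grid, smooth the occupancy counts with a 3-D Gaussian kernel, and
    weight each sample inversely to its smoothed density."""
    counts, edges = np.histogramdd(positions, bins=bins)
    smoothed = gaussian_filter(counts, sigma=sigma)
    # Look up each sample's voxel index along every axis.
    idx = tuple(
        np.clip(np.digitize(positions[:, d], edges[d][1:-1]), 0, bins[d] - 1)
        for d in range(3)
    )
    weights = 1.0 / np.maximum(smoothed[idx], 1e-8)
    return weights / weights.mean()

# Toy usage: 500 random 3-D positions in a 50 m cube.
pts = np.random.rand(500, 3) * 50.0
w = lds_weights_3d(pts)
```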
Breaking Through the 3D Barrier—Finally!
Hi everyone! Welcome back to week 7 of my blog. Last week, I was stuck in a frustrating loop with my parallel processing issues on Windows. I tried editing the code to fix the issue, but nothing seemed to work, so I decided a Linux VM would completely bypass Windows' file handling limitations... Read More
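For readers hitting similar problems: one common culprit with PyTorch data loading on Windows (though I can't confirm it was the exact one here) is that worker processes are spawned rather than forked, so anything that launches workers has to live behind a main-module guard. A minimal illustration:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    ds = TensorDataset(torch.randn(256, 3), torch.randn(256, 1))
    # On Windows, DataLoader workers are spawned (not forked), so this
    # call must run under the __main__ guard below; otherwise the script
    # re-imports itself recursively and crashes.
    loader = DataLoader(ds, batch_size=32, num_workers=2)
    for x, y in loader:
        pass  # training step would go here

if __name__ == "__main__":
    main()
```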
My Model Is Stuck in a Loop—And So Am I
Hi everyone! Welcome back to week 6 of my blog. Last week, I took a step back from experimentation and caught up on reading while my lab focused on a major conference deadline. I read Deep Imbalanced Regression via Hierarchical Classification Adjustment, which introduced a multi-level classification structure as an alternative to smoothing approaches. I... Read More
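As a toy illustration of the general coarse-to-fine idea (this is my own simplified sketch, not the paper's actual adjustment scheme), a regression target can be handled by a coarse classifier over wide label bins plus a fine classifier over narrow bins:

```python
import torch
import torch.nn as nn

class CoarseToFineHead(nn.Module):
    """Toy two-level classification head for regression: a coarse
    classifier over wide label bins and a fine classifier over narrow
    bins. Illustrates the multi-level structure only; the paper's
    specific adjustment between levels is not reproduced here."""
    def __init__(self, feat_dim=128, n_coarse=10, fine_per_coarse=10):
        super().__init__()
        self.coarse = nn.Linear(feat_dim, n_coarse)
        self.fine = nn.Linear(feat_dim, n_coarse * fine_per_coarse)

    def forward(self, feats):
        return self.coarse(feats), self.fine(feats)

# Usage: train both heads with cross-entropy over binned labels, then
# recover a continuous prediction from the expected fine-bin center.
head = CoarseToFineHead()
logits_coarse, logits_fine = head(torch.randn(4, 128))
```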
Slow and Steady – Catching Up on Reading & Debugging
Hi everyone! Welcome back to week 5 of my blog. This week was relatively slow on the experimental side as my lab and mentor prepared for a big conference deadline, so my troubleshooting efforts were somewhat on hold. However, yesterday, my mentor gave me the advice that since the parallel processes are loading and updating... Read More
From 1D to 2D: Scaling Imbalanced Regression (Once My Computer Lets Me)
Hi everyone! Welcome back to week 4 of my blog. Last week we looked into Rank-N-Contrast, a method that restructures the learning process to improve how a model understands continuous relationships. I also went over the issues I was facing with extended training times. This week, I tried looking into enabling more... Read More
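Here is a hedged sketch of a Rank-N-Contrast-style loss as I understand it: for each anchor, every other sample serves as a positive in turn, contrasted only against samples whose label distance from the anchor is at least as large. Details (and efficiency) may differ from the official implementation; the unvectorized loops are for readability.

```python
import torch
import torch.nn.functional as F

def rank_n_contrast_loss(features, labels, temperature=0.1):
    """Sketch of a Rank-N-Contrast-style loss: each anchor treats every
    other sample as a positive in turn, contrasted against the set of
    samples at least as far away in label space."""
    feats = F.normalize(features, dim=1)
    sims = feats @ feats.t() / temperature            # pairwise similarities
    label_dist = torch.cdist(labels.view(-1, 1), labels.view(-1, 1))
    n = feats.size(0)
    eye = torch.eye(n, dtype=torch.bool)
    loss = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # Denominator set: samples at least as far (in label space)
            # from the anchor as positive j, excluding the anchor itself.
            mask = (label_dist[i] >= label_dist[i, j]) & ~eye[i]
            denom = torch.logsumexp(sims[i][mask], dim=0)
            loss = loss + (denom - sims[i, j])        # -log softmax term
    return loss / (n * (n - 1))

# Toy usage with random embeddings and continuous labels.
loss = rank_n_contrast_loss(torch.randn(8, 16), torch.rand(8))
```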
Is My GPU Even Working? Why My Model Trained So Slowly—And How I’m Fixing It
Hi everyone and welcome to week 3 of my blog! Last week, I dug deeper into why the models trained on my computer produced such interesting results when using the imbalanced regression techniques Feature Distribution Smoothing and Label Distribution Smoothing. The main takeaway? Training time might have been the limiting factor. But this raised a broader... Read More
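If you ever wonder the same thing about your own machine, these are the kinds of quick PyTorch sanity checks worth running first (assuming a CUDA-capable setup; the matrix sizes are arbitrary):

```python
import torch

# Quick checks to confirm training is actually using the GPU.
print(torch.cuda.is_available())          # False means CPU-only training
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # which GPU PyTorch sees
    x = torch.randn(1024, 1024, device="cuda")
    print((x @ x).device)                 # ops should report device='cuda:0'
```

Remember that the model and every batch must also be moved to the device explicitly, or computation silently stays on the CPU.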
The AI Experiment I Didn’t Fully Finish—But Probably Should Have
Hi everyone and welcome to week 2 of my blog! Last week, I trained two models using the IMDB-WIKI age estimation example in [2]. The first model was a baseline that didn't use any imbalanced regression techniques, and the second incorporated Feature Distribution Smoothing (FDS) and Label Distribution Smoothing (LDS), two imbalanced regression techniques. Those... Read More
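Since FDS comes up repeatedly in this project, here is a heavily simplified sketch of the idea as I understand it: estimate per-bin feature statistics, smooth them across neighboring label bins, and re-standardize features toward the smoothed statistics. The real method maintains running statistics inside the network during training; the bin count and kernel width here are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def fds_calibrate(features, labels, num_bins=50, sigma=2.0):
    """Toy sketch of feature distribution smoothing (FDS): compute each
    label bin's feature mean/std, smooth those statistics across nearby
    bins with a Gaussian kernel, then whiten features with their own
    bin's stats and re-color them with the smoothed stats."""
    edges = np.linspace(labels.min(), labels.max(), num_bins + 1)
    idx = np.clip(np.digitize(labels, edges[1:-1]), 0, num_bins - 1)
    dim = features.shape[1]
    mean, std = np.zeros((num_bins, dim)), np.ones((num_bins, dim))
    for b in range(num_bins):
        sel = idx == b
        if sel.any():
            mean[b] = features[sel].mean(0)
            std[b] = features[sel].std(0) + 1e-8
    s_mean = gaussian_filter1d(mean, sigma=sigma, axis=0)
    s_std = gaussian_filter1d(std, sigma=sigma, axis=0)
    return (features - mean[idx]) / std[idx] * s_std[idx] + s_mean[idx]

# Toy usage: calibrate random features against continuous age labels.
feats = fds_calibrate(np.random.randn(500, 64), np.random.rand(500) * 100)
```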
What Would You Do if Your Self-Driving Car Faced a Pitch-Black Tunnel?
As AI breakthroughs continue to dominate headlines, giving us everything from photorealistic images to homework-destroying chatbots, it's easy to forget that these technologies are only as strong as the data they are built on, and all too often, that data is far from complete. Nowhere is this imbalance more critical than in self-driving vehicles, where an... Read More
