Conclusion: The ICON Journey
Anshul B
As the first phase of the ICON project comes to a close, I want to take a moment to reflect on everything we’ve accomplished and thank everyone who has followed along through these blog posts.
Over the past several weeks, we took ICON from a simple idea (a self-driving car for the GCU campus) to a fully functioning prototype capable of object detection, depth estimation, and autonomous navigation. From building our CNN model to implementing the Depth-from-Defocus (DFD) algorithm and finally integrating all of our hardware components with the Raspberry Pi, we brought our idea to life.
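For readers curious about the DFD idea mentioned above, here is a minimal sketch of the thin-lens relationship it rests on: an object's blur-circle size grows in a predictable way as the object moves away from the focus plane, so measuring blur lets you invert for depth. The function name and the lens constants below are illustrative defaults, not ICON's actual calibrated values.

```python
# Hedged sketch of the thin-lens model behind Depth-from-Defocus (DFD).
# All constants are illustrative, not ICON's calibration.
def depth_from_blur(blur_diam_m, focal_m=0.004, aperture_m=0.002, focus_dist_m=1.0):
    """Invert the thin-lens blur model for an object beyond the focus plane.

    Model: blur = A * f * (u - u_f) / (u * (u_f - f))
         = k * (1 - u_f / u),  where k = A * f / (u_f - f).
    Solving for depth u:  u = u_f / (1 - blur / k).
    """
    k = aperture_m * focal_m / (focus_dist_m - focal_m)  # asymptotic blur limit
    if blur_diam_m >= k:
        # Blur has reached its far-field limit: object is effectively at infinity.
        return float("inf")
    return focus_dist_m / (1.0 - blur_diam_m / k)
```

In practice the blur diameter would come from an image-sharpness measure around a detected object, and the lens constants from a one-time calibration.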
In Phase 1, our focus was purely on object avoidance, a critical foundation for any autonomous system. We successfully trained a model that could recognize classroom objects, calculate distances using camera blur, and generate steering commands that allowed our car to navigate spaces like college classrooms. Watching ICON move on its own for the first time proved that all the hard work was worth it.
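The detect-then-steer loop described above can be sketched in a few lines. This is a simplified, hypothetical version of that logic (the function name, thresholds, and obstacle format are all assumptions for illustration, not ICON's actual code): given detected obstacles with blur-derived depth estimates, steer away from the closest one, or stop if it is too near.

```python
# Illustrative sketch of a Phase 1-style avoidance policy; names and
# thresholds are hypothetical, not ICON's real implementation.
def steering_command(obstacles, stop_dist=0.5, avoid_dist=1.5):
    """obstacles: list of (x_center, depth_m) pairs, where x_center is the
    obstacle's horizontal position in the frame, scaled to [-1, 1]
    (negative = left half). Returns 'forward', 'left', 'right', or 'stop'."""
    if not obstacles:
        return "forward"
    x, depth = min(obstacles, key=lambda o: o[1])  # closest obstacle wins
    if depth < stop_dist:
        return "stop"                              # too close: halt
    if depth < avoid_dist:
        return "right" if x <= 0 else "left"       # steer away from its side
    return "forward"                               # nothing nearby: proceed
```

In the real system, commands like these would be translated into motor signals on the Raspberry Pi each frame.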
Of course, it wasn't all smooth sailing. We hit roadblocks along the way: TensorFlow troubles, hardware malfunctions, code bugs, camera connection issues, and more. But every time we failed, we learned something new. And that's my final piece of advice to you all: always get up when you fall. What matters most is your ability to keep pushing forward, even when the path isn't clear. Thank you again (every single one of you) for sticking with me on this journey. ICON was a success, and this is only the beginning.
Until next time.