Week 10: ICON Comes to Life

Anshul B -

Welcome back to the final update of Phase 1 of ICON! My name is Anshul, and this week has been very exciting: we finally got ICON up and running as a fully operational system. After months of research, coding, hardware setup, and troubleshooting, we saw our work in action.

All the hardware and software components have now been fully integrated on the small-scale robotic car: the CNN, the depth-from-defocus (DFD) algorithm, the steering logic, the Raspberry Pi, the webcam, and the motor driver. With our GitHub pipeline in place, the system redeploys automatically whenever new code is pushed, which keeps iteration fast. Our webcam captures defocused images for depth estimation, the CNN classifies objects, and the Raspberry Pi uses this information to send steering commands through the motor driver. And yes, it all works consistently.
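To illustrate how these pieces fit together, here is a minimal sketch of ICON's sense-decide-act loop. The function names, thresholds, and the stand-in frame format are all hypothetical, not the project's actual code; the real system would feed webcam captures into the DFD module and CNN instead of the placeholders below.

```python
def estimate_depth_dfd(frame):
    # Placeholder for depth-from-defocus: the real module would compare
    # blur across defocused captures to estimate distance in meters.
    return frame.get("blur_depth_m", 5.0)

def classify_obstacle(frame):
    # Placeholder for the CNN classifier (e.g. "chair", "table", "clear").
    return frame.get("label", "clear")

def decide_steering(depth_m, label, stop_dist_m=0.5, avoid_dist_m=1.5):
    """Map perception outputs to a motor-driver command (thresholds assumed)."""
    if label != "clear" and depth_m < stop_dist_m:
        return "stop"          # obstacle dangerously close
    if label != "clear" and depth_m < avoid_dist_m:
        return "turn"          # steer around the obstacle
    return "forward"           # path is clear

def control_step(frame):
    # One iteration of the loop: perceive, then act.
    depth = estimate_depth_dfd(frame)
    label = classify_obstacle(frame)
    return decide_steering(depth, label)

if __name__ == "__main__":
    # Simulated frames standing in for webcam captures.
    print(control_step({"label": "chair", "blur_depth_m": 0.3}))  # stop
    print(control_step({"label": "chair", "blur_depth_m": 1.0}))  # turn
    print(control_step({"label": "clear"}))                       # forward
```

On the actual car, the returned command would be translated into PWM signals for the motor driver; the structure of the loop (depth estimate, classification, steering decision) is what carries over.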

The major activity at our site this week was testing ICON in real environments. We ran multiple trials across several college classrooms, observing how ICON responded to obstacles such as chairs, tables, and backpacks. The car consistently navigated around these objects, showing that the system can handle cluttered environments reliably.

This milestone marks the end of Phase 1 of the ICON project: object avoidance. From here, we plan to improve real-time learning, enhance steering logic, and possibly expand to mobile or cloud-based control, which leads into Phase 2: Optimization.

Thank you for reading my week 10 blog post. Stay tuned for the final blog post coming up!
