Week #2 — The Dataset, Contd.

Sachin C -

Hello!

Now that I’ve begun the process of gathering and preprocessing my dataset, my logical next step is to begin figuring out how to train my AI model. This involves selecting the right architecture, fine-tuning hyperparameters, and ensuring that the model performs efficiently on limited hardware.

This week:

  • Choosing the Model Architecture: Since I’m working with resource-constrained microcontrollers, I need a model that is both lightweight and effective. I’m currently experimenting with MobileNet, an architecture popularized by Google that is known for its efficiency in image recognition tasks — exactly what I need.
  • Training the Model: Using TensorFlow and PyTorch, I’m running initial training sessions on my dataset. The goal is to optimize for both accuracy and computational efficiency.
  • Testing on Simulated Hardware: Before deploying onto a physical microcontroller, I’m using software-based simulations to test how the model performs under restricted memory and compute conditions. This keeps me from overstressing my actual hardware and spending unnecessarily on replacements for overtaxed power supply units and the like.
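To make the first step concrete, here is a minimal sketch of how a lightweight MobileNet-based classifier might be set up in TensorFlow/Keras. The input size, width multiplier (`alpha`), and class count are placeholders I chose for illustration, not values from my actual dataset:

```python
import tensorflow as tf

NUM_CLASSES = 4            # assumption: placeholder class count
INPUT_SHAPE = (96, 96, 3)  # small input keeps compute low for microcontrollers

def build_model():
    # alpha=0.35 shrinks the width of every layer, trading a little
    # accuracy for a much smaller parameter count.
    base = tf.keras.applications.MobileNetV2(
        input_shape=INPUT_SHAPE,
        alpha=0.35,
        include_top=False,
        weights=None,   # train from scratch on the custom dataset
        pooling="avg",  # global average pooling instead of a flatten
    )
    return tf.keras.Sequential([
        base,
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The appeal of MobileNet here is that `alpha` and the input resolution give two simple knobs for scaling the model down until it fits the target hardware.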

Challenges and Adjustments

One major challenge I’m encountering is balancing accuracy with efficiency. A complex model might be more accurate but too computationally demanding for a microcontroller. To address this, I’m experimenting with techniques such as pruning and quantization, which shrink the model while largely maintaining its performance. This work runs alongside the dataset preparation, which still has to be finished before serious model development can begin.
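As a sketch of what the quantization step could look like, TensorFlow Lite supports post-training full-integer quantization of a trained Keras model. The `representative_data` iterable here is a stand-in for a small sample of real preprocessed inputs, which the converter uses to calibrate the int8 ranges:

```python
import tensorflow as tf

def quantize_to_int8(model, representative_data):
    """Convert a trained Keras model to a fully int8 TFLite flatbuffer."""
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    def rep_gen():
        # The converter calibrates activation ranges from these samples.
        for sample in representative_data:
            yield [sample]

    converter.representative_dataset = rep_gen
    # Restrict to int8 ops so the result can run on integer-only hardware.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    return converter.convert()
```

The resulting flatbuffer stores weights as 8-bit integers instead of 32-bit floats, roughly a 4x size reduction, which is the kind of saving that makes microcontroller deployment feasible.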

 

Hope to see you next week.
