Introduction
Hello Everybody,
My name is Aarsh, and I am currently working on a Senior Research Project in explainable AI.
My interest in AI, particularly in autonomous systems, stems from a fascination with how intelligent systems can solve complex real-world problems. While I initially aimed to pursue autonomous systems research in college, circumstances prevented me from fully exploring this path. However, my passion for AI and its potential to transform industries, especially in areas like robotics, self-driving cars, and decision-making systems, has only grown stronger.
In my senior research project, I aim to replicate and build upon the findings of the paper “Towards Interpretable Deep Reinforcement Learning Models via Inverse Reinforcement Learning” (arXiv:2203.16464). This study introduces a novel framework that employs Adversarial Inverse Reinforcement Learning (AIRL) to provide global explanations for the decisions made by reinforcement learning models. In short, the goal is to figure out why an AI makes the decisions it does. In the paper, the authors trained a model to summarize blocks of text and then asked what the model’s true objective actually was: is it making the summary as short as possible, or paraphrasing every sentence?
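To give a more concrete picture of what that looks like under the hood, here is a minimal sketch of the discriminator at the heart of adversarial IRL. This is my own illustrative PyTorch code, not the paper’s implementation: the state and action sizes, the network shape, and the random “data” batches are all placeholder assumptions, and a real run would feed in actual expert demonstrations and rollouts from the policy being explained.

```python
# Minimal sketch of the core AIRL idea: a discriminator built around a learned
# reward function r_theta, trained to tell expert transitions apart from
# policy transitions. All sizes and data here are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, BATCH = 4, 2, 32  # placeholder dimensions

# r_theta(s, a): the learned reward. After training, inspecting this network
# is what yields the "global explanation" of the agent's behavior.
reward_net = nn.Sequential(
    nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(), nn.Linear(64, 1)
)
optimizer = torch.optim.Adam(reward_net.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def discriminator_logits(states, actions, log_pi):
    """AIRL discriminator D = exp(r) / (exp(r) + pi), written as a logit:
    logit = r_theta(s, a) - log pi(a|s)."""
    r = reward_net(torch.cat([states, actions], dim=-1)).squeeze(-1)
    return r - log_pi

for step in range(200):
    # Placeholder batches: random tensors stand in for expert demonstrations
    # and for samples from the current policy.
    expert_s, expert_a = torch.randn(BATCH, STATE_DIM), torch.randn(BATCH, ACTION_DIM)
    policy_s, policy_a = torch.randn(BATCH, STATE_DIM), torch.randn(BATCH, ACTION_DIM)
    log_pi_expert = torch.zeros(BATCH)  # log pi(a|s) under the current policy
    log_pi_policy = torch.zeros(BATCH)

    # Expert transitions are labeled 1, policy transitions 0; the reward
    # network is updated so the discriminator separates the two.
    loss = (
        bce(discriminator_logits(expert_s, expert_a, log_pi_expert), torch.ones(BATCH))
        + bce(discriminator_logits(policy_s, policy_a, log_pi_policy), torch.zeros(BATCH))
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key point is that once `reward_net` is trained, you can probe it directly: in the summarization example, you could check whether it assigns higher reward to shorter outputs or to heavily paraphrased ones, which is exactly the kind of question the paper asks.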
My objectives include implementing this framework to verify its effectiveness in enhancing the interpretability of deep reinforcement learning models. Additionally, I plan to explore potential improvements or adaptations to this approach, aiming to contribute to the development of more transparent and understandable AI systems.
Thank you so much for reading. Please feel free to leave any comments, questions, or concerns you may have regarding the project.
Comments:
All viewpoints are welcome, but profane, threatening, disrespectful, or harassing comments will not be tolerated and are subject to moderation up to, and including, full deletion.