Introduction

Rehan N -

Hello everyone! I’m Rehan Nagabandi and I’m a senior at BASIS Peoria. I’m excited to welcome you to the introductory post of my senior project blog. So, let’s get right into it! 

Lately, I’ve found myself drawn into artificial intelligence, particularly “generative AI”: tools like ChatGPT that can generate everything from prose to code from just a few basic instructions. The more I explored, though, the more I began to question some of the implications behind such tools: how can we tell the difference between human and AI text, and should we even try? 

That’s where AI detectors come in. These programs promise accurate results to anyone who wants to know whether a piece of content was generated by AI, which could provide a safety net in professional settings where authorship matters. But then I uncovered a complication: it’s not always clear how fair these tools really are, and AI detection services rarely provide much context about their accuracy metrics, perhaps because doing so could hurt sales. 

As I began researching, I found that while some studies show the detectors can be reasonably effective even in uncontrolled conditions, other studies report concerningly higher error rates, especially for more unconventional writing styles. That had me wondering: what happens when someone’s writing style doesn’t fit the ‘normal’ human text data the detectors are trained on? 

If AI detectors are built on assumptions about what human text should look like, then they may unfairly flag writers whose work doesn’t fit those assumptions. Specifically, I’ll be investigating this issue for ESL (English as a Second Language) speakers. This is more than a technical problem; it’s an ethical question about how AI detectors are employed. 

In the coming weeks, I’ll further unpack my research journey and I hope you’ll join me there as I dig deeper!

Comments:

    muna_n
    This sounds like a fascinating and timely project, Rehan! Have you come across any specific cases where ESL writers were unfairly flagged by AI detectors? Looking forward to seeing what your research uncovers!
    Anonymous
    Thank you for your interest in my project, and for highlighting this important issue. Yes, there have been notable instances where AI detectors have unjustly flagged work by ESL writers. A study from Stanford University revealed that seven AI detectors misclassified essays by non-native English speakers as AI-generated 61% of the time, while accurately assessing native English speakers' essays. This discrepancy arises because AI detectors often rely on "perplexity" metrics, which evaluate the predictability and complexity of text. Non-native speakers might use simpler vocabulary and sentence structures, leading detectors to mistakenly identify their authentic work as AI-generated. Such biases have significant implications, especially in educational settings where students have faced false accusations of cheating due to these inaccuracies. Addressing these biases is crucial to ensure fairness and accuracy in AI detection tools.
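    To make the “perplexity” idea concrete, here’s a toy sketch (not any real detector’s code; the probabilities are made-up stand-ins for what a language model might assign to each token). Low perplexity means the text was easy for the model to predict, which is exactly the signal that can penalize simpler vocabulary:

    ```python
    import math

    def perplexity(token_probs):
        # Perplexity is the exponential of the average negative
        # log-probability a language model assigns to each token.
        avg_neg_log = -sum(math.log(p) for p in token_probs) / len(token_probs)
        return math.exp(avg_neg_log)

    # Hypothetical per-token probabilities from some language model:
    predictable = [0.9, 0.8, 0.85, 0.9]   # easy-to-predict text
    surprising  = [0.1, 0.2, 0.05, 0.15]  # hard-to-predict text

    print(perplexity(predictable))  # low perplexity: risks an "AI" flag
    print(perplexity(surprising))   # high perplexity: reads as more "human"
    ```

    A detector that thresholds on this number will treat straightforward, predictable prose as suspicious, regardless of who actually wrote it.
    
    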
    rehan_n
    Whoopsie, my account must've been signed out!
