Ethics

Rehan N -

Artificial intelligence is growing at an unprecedented rate in nearly every sector, from healthcare and finance to the creative industries. Naturally, with this growth, AI ethics has become even more relevant, raising several questions. Who ensures that AI remains fair? Companies? Governments? How do they prevent AI from reinforcing inequalities that already exist in the real world?

In this post, I’ll discuss the ethical pillars of AI and then tie them into my research on AI detectors. Based on existing literature, AI ethics can be summarized in the following principles:

  1. Fairness — AI should not, whether intentionally or unintentionally, target certain groups in ways that disadvantage them. 
  2. Transparency — AI companies should make it clear to users how their services work and be upfront about issues like ‘hallucinations’. 
  3. Privacy — AI tools should not misuse, or even store, personal data without a user’s consent. 
  4. Safety — AI systems should have boundaries in place so that they don’t cause unintended consequences. 
  5. Human Oversight — AI should support human decision-making in high-stakes environments like law enforcement or healthcare (if used at all), rather than blindly replacing a human. 

AI detection tools like GPTZero, Turnitin, or QuillBot claim to accurately determine when a text has been AI-generated. However, my research, along with findings from other studies, shows that the misclassification rate is much higher for human writing by ESL (English as a Second Language) writers. In my prospective study, the metrics measured are the false positive rate, the false negative rate, and overall accuracy. From an AI ethics standpoint, the largest concern is the false positive rate: when human-written text is classified as AI, a student who did honest work could face academic penalties without doing anything wrong. Moreover, some companies are now using AI detectors to screen job applications or emails for AI, which could disproportionately affect ESL speakers when their professional credibility is being evaluated. 
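To make the metrics above concrete, here is a quick sketch of how they are computed from a detector's confusion-matrix counts. The counts themselves are made-up numbers for illustration, not results from my study:

```python
# Toy confusion-matrix counts for an AI detector (illustrative numbers only).
# "Positive" means the detector flagged a text as AI-generated.
tp = 80  # AI-written text correctly flagged as AI
fp = 15  # human-written text wrongly flagged as AI (the ethical concern)
fn = 20  # AI-written text the detector missed
tn = 85  # human-written text correctly passed

false_positive_rate = fp / (fp + tn)   # share of human texts wrongly flagged
false_negative_rate = fn / (fn + tp)   # share of AI texts missed
accuracy = (tp + tn) / (tp + fp + fn + tn)

print(f"FPR: {false_positive_rate:.2f}")   # 0.15
print(f"FNR: {false_negative_rate:.2f}")   # 0.20
print(f"Accuracy: {accuracy:.2f}")         # 0.82
```

Note that a detector can report a high overall accuracy while still having a false positive rate that is unacceptable for honest students, which is why my study tracks these rates separately.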

As AI continues to evolve, ensuring ethical development is more crucial than it has ever been.

Comments:

    heet_d
    Hey Rehan, your topic is very interesting! But given that GPTZero and some others have already implemented several measures to mitigate bias against ESL writers, do you believe human oversight is enough as a solution, or are deeper structural changes needed in how these AI tools are built to ensure ethical development?
    rehan_n
    Hey Heet, thanks for your comment! While human oversight is definitely necessary to catch biases and errors that AI detectors might introduce, deeper structural changes in how these tools are trained and evaluated are crucial for ethical development. My research shows that AI detectors often rely on perplexity-based metrics, which inherently disadvantage ESL writers. To make real progress, we need models trained on more diverse linguistic datasets and greater transparency in how detection tools assess text. Without these structural improvements, human oversight alone won’t be enough to ensure fairness.
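(To make the perplexity point in the reply above concrete: a minimal sketch of a perplexity-threshold detector, using made-up per-token log-probabilities rather than a real language model. Text the model finds very predictable gets low perplexity and is flagged as "AI", which is where more formulaic ESL prose can get caught in the net.)

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(-mean log-probability) over the token sequence."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def flag_as_ai(token_logprobs, threshold=20.0):
    """Toy detector: low perplexity (highly predictable text) -> flagged as AI."""
    return perplexity(token_logprobs) < threshold

# Made-up per-token log-probs under some hypothetical language model.
predictable_text = [-1.2, -0.8, -1.0, -0.9, -1.1]   # formulaic phrasing
surprising_text  = [-3.5, -4.1, -2.9, -3.8, -3.6]   # idiosyncratic phrasing

print(flag_as_ai(predictable_text))  # True: low perplexity, flagged as AI
print(flag_as_ai(surprising_text))   # False: high perplexity, passes as human
```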
