OpenAI, the creator of ChatGPT, has introduced a new AI Text Classifier tool aimed at addressing the chatbot's reputation as a vehicle for academic dishonesty. The tool is designed to help educators determine whether a homework assignment was written by a student or by an artificial intelligence. OpenAI cautions that the classifier is not foolproof and may sometimes produce inaccurate results.

Worries of Academic Dishonesty

The launch of ChatGPT on OpenAI's website in November 2022 drew millions of users to experiment with the tool, including students, raising concerns about academic dishonesty. In response, some school districts, including New York City and Los Angeles, have banned the use of ChatGPT in classrooms and on school devices.

School Districts Embrace AI Technology

However, other districts, such as Seattle Public Schools, have opened access to educators who want to use the tool as a teaching resource. The district is exploring how ChatGPT might be used in the classroom to help train students to become critical thinkers and to provide assistance with assignments.

Debates on Responsible Use of AI

Higher education institutions worldwide have begun debating the responsible use of AI technology, with Sciences Po in France prohibiting its use and warning of consequences for students caught using it. OpenAI has said it is working on new guidelines to help educators make informed decisions about the technology. OpenAI policy researcher Lama Ahmad noted that the company does not want to push educators one way or another, but rather to give them the information they need to make the right decisions for their institutions.

AI Text Classifier Limitations

OpenAI has emphasized the limitations of its detection tool in a blog post, noting that it could also be used to detect automated disinformation campaigns and other misuses of AI that mimic human behavior.
