Students, beware! OpenAI makes cheating with ChatGPT impossible.

Coming from a family of school teachers, I have always been concerned about how students can use ChatGPT to cheat on homework. There are tools that claim to detect AI-generated text, but their reliability is shaky at best.

That is probably why OpenAI quietly updated its May blog post (as spotted by TechCrunch) to acknowledge that the company has developed a "text watermarking method" that is ready to go. However, there are concerns holding the team back from releasing it.

So what are the methods that OpenAI has been working on? There are several, but the company details two.

The first is text watermarking: subtly nudging the model's word choices so that a statistical pattern is embedded in the output, invisible to readers but detectable with the right tool. According to OpenAI, the technique has proven highly accurate and holds up well against localized tampering, such as light paraphrasing of a few sentences.

The other method OpenAI has explored is the use of classifiers. You see these constantly in machine learning: they are what lets an email app automatically file messages into the spam folder or sort important mail into the main inbox. The same idea could be applied here, with a classifier estimating whether an essay was generated by AI. The Wall Street Journal reports that these tools are essentially ready to go, and that OpenAI is simply sitting on them. So what is the holdup? Quite simply, the tools are not completely foolproof and could do more harm than good.
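To make the classifier idea concrete, here is a minimal sketch of a generic text classifier, the same spam-filter recipe applied to the question "was this written by AI?". Everything in it (the toy training examples, the TF-IDF features, the logistic regression model) is an assumption chosen for illustration; it is not OpenAI's classifier, which the company has not described in detail.

```python
# Minimal sketch of a generic "AI-written vs. human-written" text classifier.
# Purely illustrative: the training data, features, and model choice are
# assumptions and are unrelated to OpenAI's internal tools.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = AI-generated, 0 = human-written.
texts = [
    "The industrial revolution fundamentally transformed society in numerous ways.",
    "ngl that essay took me forever, wrote it at 2am lol",
    "In conclusion, there are many factors to consider regarding this topic.",
    "my dog ate half my notes so this is mostly from memory, sorry",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: the same basic recipe used in
# many spam filters, here repurposed to flag machine-written prose.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

essay = "In conclusion, the causes of the industrial revolution were numerous."
# Estimated probability that the essay is AI-written.
print(detector.predict_proba([essay])[0][1])
```

With only four toy examples this would of course be useless in practice; the point is just that a classifier outputs a probability, not a certainty, which is exactly why a tool like this can "do more harm than good" when it misfires.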

As mentioned above, watermarking holds up against localized tampering, but not so much against "globalized tampering". Certain cheeky tricks, such as "using a translation system, rewording the output with a different generative model, or inserting special characters between every word and then asking the model to remove them," can strip the watermark entirely.
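To give a sense of how a text watermark can even be checked, here is a toy sketch modeled on published academic "green list" schemes, in which the generator is nudged toward a pseudorandom subset of words and the detector simply counts how often that subset shows up. OpenAI has not disclosed its own method, so every detail below (the hashing, the word-level granularity, the 0.5 baseline) is an assumption, not a description of ChatGPT's watermark.

```python
# Toy statistical watermark detector in the style of published "green list"
# schemes. OpenAI has not published its method; this is only an assumed
# illustration of the general principle.
import hashlib

def is_green(prev_word: str, word: str) -> bool:
    """Pseudorandomly assign each word to a 'green' half of the vocabulary,
    keyed on the previous word, so ordinary text lands on green ~50% of the time."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of word pairs on the green list. A watermarking sampler would
    bias generation toward green words, pushing this well above 0.5."""
    words = text.lower().split()
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

# Roughly 0.5 for ordinary text; noticeably higher if a sampler had favored
# green words during generation.
print(green_fraction("the model gently prefers certain word pairings throughout the text"))
```

This also shows why the attacks above work: swapping a few words only perturbs a few of these pairwise checks, while a round-trip translation or full paraphrase resamples every pair and drags the statistic back down to chance.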

Another problem is the disproportionate impact on some groups: AI can be a "useful writing tool for non-native English speakers," and nobody wants a detection tool to stigmatize its use in that situation.
