As the author of a paper, a researcher is unquestionably responsible for its opinions and content, but how can an AI be held responsible for what it writes? If AI-generated content is fallacious, inappropriate, fabricated, or even plagiarized, who should be held accountable? Given the problems AI-assisted cheating is already causing, everyone from OpenAI itself to academic journal publishers and grassroots developers is studying how to tell whether the author of a piece of text is a human or a machine.
OpenAI is currently developing detection tools of its own. Scott Aaronson, a guest researcher at OpenAI, said in a talk at the University of Texas that the company is combating cheating by watermarking AI-generated content. The technique adjusts ChatGPT's word-generation rules so that pseudo-random word choices appear at specific positions in the output. The pattern is difficult for a reader to notice, but, like a secret code, anyone who holds the key can easily detect it.
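The idea can be illustrated with a toy sketch. Note this is not OpenAI's actual scheme (which is unpublished); it is a minimal illustration of the principle, assuming a shared secret key and a keyed hash that splits the vocabulary into "preferred" (green) and "avoided" (red) words at each step. A generator biased toward green words leaves a statistical trace that only a key-holder can measure:

```python
import hashlib
import hmac

KEY = b"secret-watermark-key"  # hypothetical shared key, for illustration only

def is_green(prev_word: str, candidate: str, key: bytes = KEY) -> bool:
    """Keyed pseudo-random partition of the vocabulary: for any given
    context word, roughly half of all candidate words hash to 'green'."""
    digest = hmac.new(key, f"{prev_word}|{candidate}".encode(), hashlib.sha256).digest()
    return digest[0] % 2 == 0

def looks_watermarked(text: str, key: bytes = KEY, threshold: float = 0.7) -> bool:
    """Flag text as watermarked if the fraction of green word transitions
    is far above the ~50% expected of unwatermarked text."""
    words = text.split()
    if len(words) < 2:
        return False
    green = sum(is_green(a, b, key) for a, b in zip(words, words[1:]))
    return green / (len(words) - 1) >= threshold
```

Ordinary text lands near a 50% green rate and passes as human; text produced by a generator that consistently picks green continuations scores near 100% and is flagged. Without the key, the partition looks random, which is why readers cannot spot the watermark.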

Is this content generated by ChatGPT? Springer Nature, the publisher of Nature, is also developing technology to detect LLM-generated text. Not long ago, Edward Tian, a Chinese student at Princeton University, released GPTZero, an application built specifically to catch ChatGPT. It uses two indicators, perplexity and burstiness, to judge whether content is human-written or machine-generated.
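GPTZero's actual implementation is not public, but the two indicators can be sketched in simplified form. In this toy version, "perplexity" is computed under a unigram model estimated from the text itself (real detectors score text under a large language model), and "burstiness" is approximated as the variance of sentence lengths, reflecting the intuition that humans mix short and long sentences while machine text is often more uniform:

```python
import math
import re
from collections import Counter

def toy_perplexity(text: str) -> float:
    """Perplexity under a self-estimated unigram model: low values mean
    the text's word choices are highly predictable."""
    words = text.lower().split()
    counts = Counter(words)
    n = len(words)
    log_prob = sum(math.log(counts[w] / n) for w in words)
    return math.exp(-log_prob / n)

def toy_burstiness(text: str) -> float:
    """Variance of sentence lengths as a rough burstiness proxy: higher
    variance suggests the uneven rhythm typical of human writing."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    return sum((l - mean) ** 2 for l in lengths) / len(lengths)
```

A detector along these lines would flag text whose perplexity and burstiness are both low, i.e. predictable wording delivered in uniformly sized sentences.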