Discuz! Board


Currently, OpenAI is developing

Posted on 2024-3-11 11:35:01
As the author of a paper, a researcher is unquestionably responsible for its opinions and content, but how can an AI be held responsible for what it generates? If AI-generated content is fallacious, inappropriate, fabricated, or even plagiarized, who should be held accountable? Given how common AI-assisted cheating has become, everyone from OpenAI itself to academic journal publishers and independent developers is now studying the same question: how do you tell whether the author of a text is a human or a machine?


OpenAI is currently developing its own AI-detection tools. Scott Aaronson, a visiting researcher at OpenAI, said in a talk at the University of Texas that they are combating cheating by watermarking AI-generated content. The technique adjusts ChatGPT's word-generation rules so that pseudo-random word choices appear at specific positions in the output. Readers will find the pattern hard to notice, but, like a secret code, anyone who holds the key can easily verify it.
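Aaronson has not published the details of OpenAI's scheme, so the following is only a minimal sketch of the general idea behind keyed watermarking: a secret key pseudo-randomly partitions the vocabulary at each generation step, the generator favors the "green" half, and anyone holding the key can detect the resulting statistical bias. The function names and the toy vocabulary here are invented for illustration.

```python
import hashlib
import random

def green_list(prev_token: str, key: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Derive a keyed 'green' subset of the vocabulary from the previous token.

    A reader without the key sees an ordinary word; a detector with the key
    can recompute exactly this subset.
    """
    seed = int.from_bytes(hashlib.sha256((key + prev_token).encode()).digest()[:8], "big")
    rng = random.Random(seed)
    k = max(1, int(len(vocab) * fraction))
    return set(rng.sample(vocab, k))

def generate_watermarked(vocab: list[str], key: str, length: int = 40) -> list[str]:
    """Toy 'generator': always picks a green-list token, fully embedding the watermark."""
    rng = random.Random(0)
    tokens = [rng.choice(vocab)]
    for _ in range(length - 1):
        greens = sorted(green_list(tokens[-1], key, vocab))  # sort for deterministic choice
        tokens.append(rng.choice(greens))
    return tokens

def green_fraction(tokens: list[str], key: str, vocab: list[str]) -> float:
    """Detector: fraction of tokens that fall in the green list keyed by their predecessor."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev, key, vocab))
    return hits / (len(tokens) - 1)
```

With the key, watermarked text scores a green fraction near 1.0, while ordinary text hovers around the baseline of 0.5; a real system would only bias (not force) the choice, trading detectability against text quality.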

Is this content generated by ChatGPT? Springer Nature, the publisher of Nature, is also developing technology to detect LLM-generated text. Not long ago, Edward Tian, a Chinese student at Princeton University, built GPTZero, an application designed specifically to catch ChatGPT; it uses two indicators, perplexity and burstiness, to judge whether a piece of content is human-created or machine-generated.
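GPTZero's internal models are not public; as a rough illustration of the two indicators it reports, the sketch below computes perplexity (average per-token surprise under a simple unigram language model) and burstiness (variance of per-sentence perplexity, which tends to be higher for human writing). The unigram model, smoothing, and helper names are all assumptions made for this example.

```python
import math
from collections import Counter

def unigram_logprobs(corpus_tokens: list[str]) -> dict[str, float]:
    """Fit a unigram model with add-one smoothing; return log-probability per word."""
    counts = Counter(corpus_tokens)
    total = sum(counts.values()) + len(counts)
    return {w: math.log((c + 1) / total) for w, c in counts.items()}

def perplexity(tokens: list[str], logprobs: dict[str, float], floor: float = -10.0) -> float:
    """Average surprise per token; unseen tokens get a fixed low log-probability."""
    lp = [logprobs.get(t, floor) for t in tokens]
    return math.exp(-sum(lp) / len(lp))

def burstiness(sentences: list[list[str]], logprobs: dict[str, float]) -> float:
    """Variance of per-sentence perplexity: uniform (machine-like) text scores near zero."""
    ppls = [perplexity(s, logprobs) for s in sentences if s]
    mean = sum(ppls) / len(ppls)
    return sum((p - mean) ** 2 for p in ppls) / len(ppls)
```

Text made of common, predictable words scores low perplexity; sentences that swing between predictable and surprising score high burstiness, which is the signal such detectors associate with human authorship.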
