OpenAI has the tech to watermark ChatGPT text—it just won’t release it

[Image: OpenAI logo displayed on a phone screen and ChatGPT website displayed on a laptop screen. Credit: Getty Images]

According to The Wall Street Journal, there's internal conflict at OpenAI over whether to release a watermarking tool that would let people test a piece of text to see whether it was generated by ChatGPT.

To deploy the tool, OpenAI would tweak how ChatGPT generates text so that its output carries a subtle trail detectable by a companion tool. The watermark would be imperceptible to human readers without that tool, and the company's internal testing has shown it does not degrade the quality of outputs; the detector would be accurate 99.9 percent of the time. Notably, the watermark would be a pattern in the text itself, so it would survive copying and pasting and even modest edits.
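The report doesn't spell out how the watermark is embedded, but text watermarks of this general kind are typically statistical: generation is nudged toward a pseudorandomly chosen subset of tokens, and a detector checks whether that bias is present. The sketch below illustrates the detection side of one published approach (the "green list" scheme described by Kirchenbauer et al.), not OpenAI's actual method; the tokenizer, hash function, and threshold are illustrative assumptions.

```python
# Minimal sketch of a "green list" watermark detector (Kirchenbauer et al.
# style). Illustrative only -- not OpenAI's implementation; the tokenizer,
# hash, and thresholds here are assumptions for the sake of the example.
import hashlib
import math


def tokenize(text):
    # Hypothetical stand-in for a real tokenizer: split on whitespace.
    return text.lower().split()


def is_green(prev_token, token):
    # Pseudorandomly assign roughly half of all possible tokens to a
    # "green list" keyed on the preceding token. At generation time the
    # model would be nudged toward green tokens; the detector only needs
    # to re-derive the same split.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0


def detection_z_score(text):
    # Count how many tokens land on their green list and compare the
    # fraction to the 50 percent expected from unwatermarked text
    # (a one-proportion z-test).
    tokens = tokenize(text)
    if len(tokens) < 2:
        return 0.0
    pairs = list(zip(tokens, tokens[1:]))
    greens = sum(is_green(prev, tok) for prev, tok in pairs)
    n = len(pairs)
    expected = 0.5 * n
    std_dev = math.sqrt(0.25 * n)
    return (greens - expected) / std_dev


if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog"
    # A large positive z-score (say, above 4) would suggest watermarked
    # text; ordinary human-written prose should hover near zero.
    print(f"z-score: {detection_z_score(sample):.2f}")
```

If OpenAI's scheme works along these lines, the robustness described above follows naturally: the signal is spread statistically across many tokens rather than hidden in any single character, so copying and pasting preserves it and a handful of edits only weakens the detection score slightly.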

Some OpenAI employees have campaigned for the tool's release, but others believe that would be the wrong move, citing a few specific problems.
