Artificial intelligence powerhouse OpenAI has quietly shut down its AI detection software, citing low accuracy rates.
The AI classifier, developed by OpenAI, first launched on January 31 with the goal of helping users such as teachers and professors distinguish human-written text from AI-generated text.
However, according to an update to the original blog post that announced the tool's launch, the AI classifier has been unavailable since July 20:
“As of July 20, 2023, the AI classifier is no longer available due to low accuracy rates.”
The link to the tool is no longer functional, and the note offers only a brief rationale for the shutdown. The company added that it is investigating new, more effective ways to identify AI-generated content.
“We are working to incorporate feedback and are currently investigating more effective text provenance techniques, and are committed to developing and deploying mechanisms to allow users to understand whether audio or visual content is AI-generated,” the note said.
OpenAI made clear from the start that the detection tool was prone to errors and could not be considered “fully reliable.”
The company said the limitations of its AI detection tool included being “highly inaccurate” on text of fewer than 1,000 characters and sometimes “confidently” flagging text written by humans as AI-generated.
Related: Apple has its own GPT AI system, but no plans for a public release: Report
The classifier is the latest OpenAI product to come under scrutiny.
On July 18, researchers from Stanford and UC Berkeley published a study suggesting that the performance of OpenAI’s flagship ChatGPT product had deteriorated significantly over time.
We evaluated #ChatGPT‘s behavior over time and found substantial differences in its responses to the *same questions* between the June version of GPT4 and GPT3.5 and the March versions. Newer versions have gotten worse in some tasks. with Lingjiao Chen @matei_zaharia https://t.co/TGeN4T18Fd https://t.co/36mjnejERy pic.twitter.com/FEiqrUVbg6
— James Zou (@james_y_zou) July 19, 2023
The researchers found that over the past few months, ChatGPT-4’s ability to accurately identify prime numbers dropped from 97.6% to just 2.4%. Additionally, both ChatGPT-3.5 and ChatGPT-4 experienced a significant drop in the ability to generate new lines of code.
AI Eye: AI content trained AI going crazy, is Threads the leader in AI data loss?