The creator of ChatGPT has released a new tool to help teachers determine whether a text was produced by a human or a machine.
The new text classifier from OpenAI is a response to concerns raised in schools and universities that ChatGPT’s ability to produce practically anything on demand may encourage academic dishonesty and impede learning.

However, according to Jan Leike, the leader of OpenAI’s alignment team, the technique for identifying AI-generated text is “flawed” and “will be erroneous sometimes.”

“As a result, it shouldn’t be the only factor considered when making judgments,” he said.

Since ChatGPT became available on OpenAI’s website in November as a free application, millions of people have tried it out.

The ease with which students could use it to complete take-home tests and other homework has caused some educators to fret, even though many have found creative and safe applications for it.

Some colleges have acted rapidly to revise tests, essay questions, and integrity procedures due to concerns that the technology will be used to cheat and plagiarize.

Public schools in New South Wales, Queensland, Western Australia, and Tasmania have already outlawed ChatGPT.

While acknowledging that “if you’re establishing tests that might be completed simply by drawing on web resources, then you may have a problem,” the University of Western Australia’s Julia Powles said she believed the fear of cheating was “overblown.”

“Ever since we’ve had the ability to search the web or access material on Wikipedia, people have been able to draw on digital resources,” she said.

The artificial intelligence tool ChatGPT has a range of capabilities from writing essays to translation. (Supplied: ChatGPT)

In a blog post, OpenAI emphasized the limitations of its detection tool but added that, in addition to preventing plagiarism, it might also help identify automated disinformation campaigns and other instances where AI has been used to imitate humans.

The longer a chunk of text is, the better the program is at telling whether it was written by a human or an AI.

The tool classifies text as “extremely unlikely,” “unlikely,” “unclear if it is,” “maybe,” or “likely” AI-generated.

However, much like ChatGPT itself, it is difficult to understand how a result was arrived at.

“We don’t fundamentally know what kind of pattern it pays attention to, or how it works internally,” Mr. Leike said.

“At this stage, there’s really not much we can say about how the classifier actually operates.”

In France, Sciences Po has banned the use of ChatGPT and has issued a warning to anyone caught using it or other AI tools to produce written or spoken work.

In response to the criticism, OpenAI stated that it had been developing new educational guidelines for several weeks.

According to OpenAI policy researcher Lama Ahmad, “Like many other technologies, it’s possible that one district will determine that it’s not fit for use in their classrooms.

“We don’t really press them in a particular direction. We simply want to arm them with the knowledge they require to choose the best course of action for themselves.”

Recently, OpenAI executives, including CEO Sam Altman, met in California with Jean-Noël Barrot, France’s minister for the digital economy.

After the meeting, he expressed his optimism about the technology at the World Economic Forum in Switzerland.

If you are in a law faculty, there is cause for concern, he continued, because ChatGPT, among other tools, will be able to produce exam answers that are really outstanding.

If you are in the economics faculty, however, you’re fine, because ChatGPT will struggle to find or provide what is required at graduate level.

According to him, it will become increasingly important for users to understand the fundamentals of how these systems work so they are aware of any potential biases.
