OpenAI released its new AI text classifier today, after weeks of debate in schools and universities over ChatGPT's ability to write almost anything on demand, a capability that could fuel academic dishonesty and undermine learning.
OpenAI has already warned that its new tool, like others already available, is not foolproof. Jan Leike, head of the OpenAI team responsible for making its systems safer, cautioned that the method for detecting AI-written text is "imperfect and can make mistakes."
"Therefore, one should not rely solely on it when making decisions," Leike warned.
Teenagers and college students are among the millions of people who began experimenting with ChatGPT after it launched on November 30 as a free app on the OpenAI website. While many have found ways to use it creatively and harmlessly, the ease with which it answers homework questions and helps with other assignments has sown panic among some teachers.
As schools opened for the new term, major public school districts, including New York and Los Angeles, began blocking its use in classrooms and on school devices.
The Seattle Public School District blocked ChatGPT on all school devices, but later allowed access for educators who wanted to use it as a teaching tool, said district spokesman Tim Robinson.
“We can’t ignore that,” said Robinson.
The district is discussing whether to expand the use of ChatGPT in classrooms, allowing teachers to use it to train students to be better critical thinkers, and letting students use it as a "personal tutor" or to help generate ideas for schoolwork, Robinson said.
School districts across the US say the conversation around ChatGPT is evolving rapidly.
"The initial reaction was, 'Oh my God, how are we going to stop the avalanche of cheating that will happen with ChatGPT?'" said Page. But he said there is now a growing understanding that "this is the future" and that blocking it is not the answer.
"I think it would be naive if we weren't aware of the risks this tool poses, but we would also fail our students if we barred them from using it, given all its potential power," said Page, who acknowledged that districts like his may eventually unblock ChatGPT, especially once a plagiarism-detection tool is available.
OpenAI stressed the limitations of its detection tool in a post published today on its blog, but added that beyond catching plagiarism, the tool could also help detect automated disinformation campaigns and other misuses of AI to imitate humans.
The longer the text sample, the better the tool is at detecting whether the author was a human or an AI. Paste in any piece of text, a college admissions essay or a literary analysis, and the tool will rate it as "very unlikely, unlikely, unclear if it is, possibly, or likely" to have been generated by AI.
But, much like ChatGPT itself, which was trained on a vast trove of books, newspapers, and online writing yet can confidently produce falsehoods or nonsense, it is not easy to explain how the detector arrives at a result.
"Fundamentally, we don't know what patterns it picks up on or how it works internally," Leike admitted. "There's not much we can say at this point about how the classifier actually works."
Higher education institutions in many countries have also begun to debate the responsible use of AI technology. Sciences Po, one of France's most prestigious universities, banned its use last week and warned that anyone caught using it in written or oral work could be expelled from Sciences Po and other institutions.
In response to the challenge, OpenAI said it has been working for several weeks on defining guidelines to help educators.
"As with many other technologies, a district may decide that it is inappropriate for its classrooms," said OpenAI policy researcher Lama Ahmad. "We are not pushing them one way or the other. We just want to provide the information they need to make the decisions they feel are right."
This is a rare level of public scrutiny for a startup (a technology company in the early days of its existence), in this case the San Francisco-based research lab, which is now backed by billions of dollars from partner Microsoft and faces growing interest from the public and governments.
France's Minister of Digital Economy, Jean-Noel Barrot, recently met in California with OpenAI executives, including CEO Sam Altman, and a week later, at the World Economic Forum in Davos, said he was optimistic about the technology.
But Barrot, a former professor at the Massachusetts Institute of Technology (MIT) and at the Parisian business school HEC, also stressed that there are difficult ethical questions to contend with.
"If you're in law school, there is room for concern, because ChatGPT, among other tools, is apparently capable of passing exams admirably. But if you're in an economics program, ChatGPT will have a hard time producing what is expected of you," he compared.
Barrot added that it will be increasingly important for users to understand the basics of how these systems work, so that they understand what biases may exist.