A team led by the University of Zurich used the AI model GPT-3 to study 697 participants and found that they had difficulty distinguishing between tweets written by humans and those generated by the chatbot. Participants also had trouble determining which AI-generated messages were accurate and which were not.
Since ChatGPT was launched in November 2022, its widespread use has raised concerns about the potential spread of misinformation online, especially on social media platforms, the study authors note.
Since these tools are new to the public domain, the team decided to delve into different aspects of their use.
They recruited 697 English-speaking participants aged 26 to 76 from the United States, the United Kingdom, Canada, Australia, and Ireland.
The task was to evaluate human-written and GPT-3-generated tweets containing accurate and inaccurate information about topics that are often subject to public misconception, such as vaccines, autism, 5G technology, COVID-19, climate change, and evolution.
For each topic, the researchers collected human-written Twitter messages and instructed the GPT-3 model to generate additional ones, some containing correct information and some containing inaccurate information.
Study participants had to rate whether the messages were true or false and whether they were generated by a human or GPT-3.
The results indicated that participants were better at identifying misinformation written by humans and at recognizing the accuracy of true tweets generated by GPT-3. However, they were also more likely to judge GPT-3-generated misinformation as accurate.
The authors conclude, “Our findings raise important questions about the potential uses and abuses of GPT-3 and other advanced AI text generators and the implications for information dissemination in the digital age.”