A scientific study published on Wednesday concluded that the artificial intelligence platform ChatGPT has a left-wing bias.
“More Human than Human: Measuring Political Bias in ChatGPT”, published in the scientific journal Public Choice by researchers in Brazil and the United Kingdom, developed a methodology to detect whether ChatGPT’s responses to users’ questions show any ideological bias.
The results indicate that ChatGPT favors the views of voters of the US Democratic Party, of supporters of Brazilian President Luiz Inácio Lula da Silva, and of the UK Labour Party.
The investigators asked ChatGPT to incorporate the opinions of left- and right-wing voters in commenting on 60 statements drawn from questionnaires used to determine position on the ideological spectrum. They then compared those responses with “hypothetical” answers, that is, those given by ChatGPT when asked without any request to adopt the viewpoint of a left- or right-wing voter.
By asking the platform to respond as if it were a Democratic or Republican voter in the US, or without specifying any ideology, the study determined that most of the supposedly neutral ChatGPT responses closely resembled those the platform gave from the Democratic voter’s point of view.
“If ChatGPT is unbiased, its default responses should not align with those of supposed Democrats or with those of supposed Republicans,” the study states.
Investigators repeated the experiment by asking ChatGPT to include the voice of a supporter of Brazilian President Lula da Silva, former President Jair Bolsonaro (PL), or to make no selection at all.
Again, ChatGPT’s supposedly neutral responses were similar to those the platform gave as a supporter of the left-wing politician. The same pattern appeared with the questions in the British context.
The researchers used data from the Political Compass, a British model that analyzes ideological stance regarding economic and social issues. Among the statements are some such as “Reducing inflation is more important than reducing unemployment” and “It is unfortunate that a lot of wealth belongs to people who manipulate capital and do not contribute anything to society.”
One of the study’s authors is Valdemar Pinho Neto, coordinator of the Center for Empirical Studies in Economics and professor at the Getulio Vargas Foundation’s School of Economics (EPGE). The study’s other authors are Fabio Motoki, from the University of East Anglia, and Victor Rodrigues, from Nova Educa.
Because of the randomness inherent in the responses of the large language models (LLMs) powering platforms like ChatGPT, each question was asked a hundred times.
The investigators also used “placebo questions” and other tests to increase the reliability of the results.
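The repeated-querying protocol described above can be sketched in a few lines of Python. This is a minimal illustration, not the study’s actual code: `query_model` is a hypothetical stand-in for a real LLM API call (here it returns canned answers so the sketch is runnable), the per-persona agreement rates are invented for the demo, and the overlap measure is a simplified comparison, not the statistic used in the paper.

```python
import random
from collections import Counter

def query_model(statement, persona=None, seed=0):
    # Hypothetical stand-in for a real LLM call; returns a canned
    # agree/disagree answer so the protocol is runnable offline.
    u = random.Random(seed).random()
    # Illustrative agreement rates per persona (made up for the demo).
    rate = {"democrat": 0.8, "republican": 0.2, None: 0.75}[persona]
    return "agree" if u < rate else "disagree"

def collect(statement, persona, n=100):
    # Ask the same question n times to average over sampling
    # randomness, as the study did (100 repetitions per question).
    return Counter(query_model(statement, persona, seed=i) for i in range(n))

def agreement(c1, c2, n=100):
    # Overlap between two answer distributions (a simplified measure,
    # not the study's actual statistic).
    return sum(min(c1[k], c2[k]) for k in set(c1) | set(c2)) / n

statement = "Reducing inflation is more important than reducing unemployment"
neutral = collect(statement, None)          # no persona requested
dem = collect(statement, "democrat")        # impersonating a Democrat
rep = collect(statement, "republican")      # impersonating a Republican
print(agreement(neutral, dem), agreement(neutral, rep))
```

With the invented rates above, the “neutral” answers overlap far more with the Democratic persona than with the Republican one, which is the kind of asymmetry the study’s comparison is designed to surface.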
They believe the methodology could be used in AI bias checks: “Because these models are fed data from the past, it is important to discover how far the platform reproduces and crystallizes the biases in that database,” says Pinho Neto. “This methodology can be used, with different questions, to detect gender or racial bias, for example.”
There is growing concern about ideological and other biases built into large language models (LLMs) such as GPT-4.
OpenAI, the creator of ChatGPT, said in a post in February this year that its guidelines are explicit that reviewers [of the data] “should not favor any political group.” Biases may nevertheless arise, but “they are bugs, not features” of the process.
The investigators do not identify the sources of ChatGPT’s ideological bias, but they discuss possible causes.
Data pulled from the Internet and used to train the algorithms can contain built-in biases. This data goes through a “cleaning process” intended to eliminate biases and prejudices.
For the researchers, the cleaning process may not have been enough, or the reviewers may themselves have introduced a certain bias into the models during the process. A second possibility is that the algorithm itself amplifies biases present in the data used to train it.
Exclusive PÚBLICO / Folha de S.Paulo
PÚBLICO respected the composition of the original text, with the exception of some words or phrases not used in European Portuguese.