The findings are the latest in a growing body of research demonstrating the persuasive power of LLMs. The authors warn that they show how AI tools can craft sophisticated, convincing arguments even with only minimal information about the humans they interact with. The research was published in the journal Nature Human Behaviour.
“Policymakers and online platforms should take seriously the threat of AI-based disinformation campaigns, as we have clearly reached the technological level where it is possible to create a network of LLM-based automated accounts capable of strategically pushing public opinion in one direction,” he says. “These bots could be used to spread disinformation.”
The researchers recruited 900 people in the United States and had them provide personal information such as their gender, age, level of education, employment status, and political affiliation.
Participants were then matched with either another human opponent or GPT-4 and instructed to debate one of 30 randomly assigned topics, such as whether the United States should ban fossil fuels or whether students should wear school uniforms, for 10 minutes. Each participant was told to argue either for or against the topic, and in some cases they were given personal information about their opponent so they could better tailor their arguments. At the end, participants reported how much they agreed with the proposition and whether they believed they had been arguing with a human or an AI.