A two-hour conversation with an artificial intelligence (AI) model is all it takes to replicate a person's personality with reasonable accuracy, researchers have found.
In a new study published November 15 to the arXiv preprint database, researchers from Google and Stanford University created "simulation agents" (essentially AI replicas) of 1,052 individuals based on two-hour interviews with each participant. These interviews were used to train a generative AI model designed to mimic human behavior.
To assess the accuracy of the AI replicas, each participant completed two rounds of personality tests, social surveys, and logic games, and was asked to repeat the process two weeks later. When the AI replicas were given the same tests, they matched their human counterparts' responses with 85% accuracy.
The paper proposed that AI models that mimic human behavior could be useful across a range of research scenarios, such as evaluating the effectiveness of public health policies, understanding responses to product launches, or even modeling reactions to major societal events that might otherwise be too costly, challenging, or ethically complex to study with human participants.
Related: AI speech generator ‘reaches human parity’ — but it’s too dangerous to release, scientists say
"General-purpose simulation of human attitudes and behavior, where each simulated individual can engage across a range of social, political, or informational contexts, could enable a laboratory for researchers to test a broad set of interventions and theories," the researchers wrote in the paper. Simulations could also help pilot new public interventions, develop theories around causal and contextual interactions, and increase our understanding of how institutions and networks influence people, they added.
To create the simulation agents, the researchers conducted in-depth interviews that covered participants' life stories, values and opinions on societal issues. This enabled the AI to capture nuances that typical surveys or demographic data might miss, the researchers explained. Most importantly, the structure of these interviews gave participants the freedom to highlight what mattered most to them personally.
The researchers used these interviews to generate personalized AI models that could predict how individuals might respond to survey questions, social experiments, and behavioral games. This included responses to the General Social Survey, a well-established tool for measuring social attitudes and behaviors; the Big Five personality inventory; and economic games, such as the Dictator Game and the Trust Game.
Although the AI agents closely mirrored their human counterparts in many areas, their accuracy varied across tasks. They performed particularly well in replicating responses to personality surveys and determining social attitudes but were less accurate in predicting behaviors in interactive games involving economic decision-making. The researchers explained that AI typically struggles with tasks that involve social dynamics and contextual nuance.
They also acknowledged the technology's potential for misuse. AI and deepfake technologies are already being used by malicious actors to deceive, impersonate, abuse and manipulate other people online. Simulation agents can also be misused, the researchers said.
However, they said the technology could let us study aspects of human behavior in ways that were previously impractical, by providing a highly controlled test environment without the ethical, logistical or interpersonal challenges of working with humans.
In a statement to MIT Technology Review, lead study author Joon Sung Park, a computer science doctoral student at Stanford, said: "If you can have a bunch of small 'yous' running around and actually making the decisions that you would have made, that, I think, is ultimately the future."
Owen Hughes is a freelance writer and editor specializing in digital health and technology. Previously a senior editor at ZDNET, Owen has been writing about technology for more than a decade, covering everything from AI, cybersecurity and supercomputers to programming languages and the public sector. Owen is particularly interested in the intersection of technology, life and work: in his previous roles at ZDNET and TechRepublic, he wrote extensively about business leadership, digital transformation and the evolving dynamics of remote work.
Live Science is part of Future US Inc, an international media group and leading digital publisher. Visit our corporate site.