A striking new pair of studies reveals a deep divide in American attitudes toward artificial intelligence (AI) in politics versus personal finance. While 88% of experienced U.S. investors refuse to let AI manage their retirement portfolios, new research suggests Americans are far more susceptible to AI's influence over their political opinions.
Findings published in the journal Science show that AI chatbots can effectively sway political views, even when 19% of their claims are factually inaccurate. Researchers from Oxford, Stanford, MIT, and the UK's AI Security Institute engaged 77,000 participants, examining how AI can alter perceptions on critical issues like taxes and immigration. The results are troubling: the most persuasive chatbots were often the least accurate, and their influence can last for up to a month.
In stark contrast, a survey from InvestorsObserver of 1,050 seasoned U.S. investors aged 35 to 60 revealed that 88% would never allow AI to handle their 401(k) accounts. Nearly two-thirds reported never using AI for investment advice, and only 5% follow AI-generated financial recommendations without human consultation.
“People are open to using AI chatbots to generate ideas, but when it comes to life savings in 401(k)s and IRAs, they want a human hand on the wheel,” said Sam Bourgi, senior analyst at InvestorsObserver. His observation points to a telling cultural split: Americans insist on human verification when their finances are at stake, yet appear far less guarded against AI's persuasive power in politics.
Lisa Garrison, a 36-year-old from Chandler, exemplifies this sentiment. Managing a small IRA with a financial advisor, she actively avoids AI in her financial decisions. “Generative AI has been notorious for making things up that sound true without being true,” she said. “I don’t think AI should have any say in decisions that affect people’s livelihoods or lives.”
The paradox highlights how much more closely Americans guard their financial security than their political opinions. Garrison theorizes this disparity reflects a deeper cultural understanding: “Money has a real, tangible, and immediate effect on people’s lives,” she explained. Political decisions, by contrast, often lack the same perceived immediacy and consequence.
The study’s lead author, Kobi Hackenburg, cautioned against the dangers of AI’s persuasive capabilities. “These results suggest that optimizing persuasiveness may come at some cost to truthfulness,” he warned, highlighting potential ramifications for public discourse.
As AI becomes more integrated into various aspects of life, the contrast between these studies reveals critical priorities. While 59% of investors are willing to use AI for financial research, they predominantly view it as a starting point rather than a decision-maker. In contrast, 44% of U.S. adults now use AI tools like ChatGPT regularly, often without the same scrutiny.
Garrison connects these findings to recent political events, noting how many only realize the impact of their political choices when they affect their finances. “Farmers, federal workers, trade unions… it didn’t become real to them until it happened to them,” she remarked.
The warnings from researchers underscore that persuasive AI chatbots could be exploited by unscrupulous actors looking to promote radical ideologies or incite unrest. Meanwhile, the financial sector is adapting to a “hybrid” model, utilizing AI to identify risks while retaining human control over decisions.
As Americans grapple with the implications of AI in their lives, its consequences for both personal finances and political perceptions remain a pressing concern. With the AI landscape evolving rapidly, the question remains: how will trust in AI continue to shape American society?
