Pennsylvania Governor Josh Shapiro is taking significant steps to regulate artificial intelligence chatbots, citing concerns over their potential to mislead and harm children. He has directed state agencies to formulate stricter guidelines as usage of AI chatbots, such as ChatGPT, Meta AI, and Gemini, increases among young users. Shapiro emphasized the urgency of the situation during his budget address, stating, “This space is evolving rapidly. We need to act quickly to protect our kids.”
According to a survey conducted by the nonprofit Common Sense Media, a majority of U.S. teenagers engage with chatbots. Alarmingly, one in three teens reported using these technologies for social interactions and relationships. The report highlighted that many teens utilize chatbots for conversation practice, emotional support, role-playing, and even romantic engagements. Shapiro expressed his concern that, without regulation, children may be vulnerable to emotional distress, noting, “Some kids are just too young to understand the difference between AI and a real person.”
The governor’s call for regulation follows troubling incidents related to chatbot interactions. In January, Google settled multiple lawsuits alleging that chatbots on the Character.AI platform, whose technology Google had licensed, contributed to mental health crises, including suicides among young users. One case involved a Florida mother whose son died by suicide after forming a relationship with a chatbot on the platform.
Shapiro’s proposed regulations include requiring age verification and parental consent, and banning chatbots from generating sexually explicit or violent content involving children. He also advocates for policies requiring companies to report to appropriate authorities any users who express intentions of self-harm or violence, along with periodic reminders that users are interacting with AI rather than with humans.
Enforcement of these regulations raises questions. Hoda Heidari, a professor specializing in ethics and computational technologies at Carnegie Mellon University, pointed out the complexities involved. “The devil is in the details,” she stated, acknowledging that while the broader goals may be universally accepted, the feasibility of achieving them remains uncertain.
Despite skepticism from security experts regarding the effectiveness of age verification, it has gained traction among regulators as a means to protect children from inappropriate online content. Heidari warned that similar challenges could arise with chatbot interactions, emphasizing, “Faking an identification online remains very easy.” She illustrated this by comparing it to age gates on alcohol-selling websites, where users can easily input false birthdates.
Ensuring that chatbots do not produce harmful or explicit content is another significant challenge. Heidari noted that while AI companies are investigating methods to prevent the creation of child sexual abuse material, existing safeguards can often be bypassed. “Think of all the ways in which you can prompt a chatbot to generate the same kind of content you have in mind,” she added.
In response to the growing concerns surrounding chatbot use, Shapiro has urged state lawmakers to draft legislation aimed at protecting children and vulnerable users from potential risks. A bipartisan bill currently in the state Senate proposes establishing “age-appropriate standards” and implementing safeguards against content that could encourage self-harm or violence. It also aims to direct users to crisis resources whenever high-risk language is detected.
The effectiveness of these protections remains uncertain, as Heidari notes that enforcing such requirements could be challenging. Nevertheless, she believes the difficulties should not deter regulatory efforts. Heidari advocates a “Swiss cheese model” of risk management: no single protection is foolproof, but multiple overlapping layers of safeguards can catch what any one layer misses.
The rapid, unregulated growth of AI technologies resembles a modern-day gold rush, with the landscape evolving faster than policy can keep pace. Under the Trump Administration, the federal approach to AI regulation remains unclear. In December, President Trump signed an executive order discouraging state-level regulations and promising a minimally burdensome national framework. This order established an AI litigation task force aimed at challenging state laws that conflict with federal priorities.
As states like California and New York enact their own regulations, the potential for a fragmented system looms. California has already implemented a series of laws since 2024 focused on improving transparency and safety in AI. Heidari highlighted the challenge posed by this patchwork approach, stating, “This doesn’t bode well for a unified, coherent system where companies and other AI stakeholders can decide what to do.”
Without clear federal guidance, influential states such as California and New York are likely to dictate the regulatory landscape for AI. Heidari noted that companies are unlikely to customize their chatbot platforms for each state, opting instead to comply with the rules of larger states that set the tone for the industry.
As Pennsylvania develops its regulatory framework under Shapiro’s initiative, it may position itself as a significant player in the national conversation about AI governance. Heidari commended the Shapiro administration for taking a responsible stance and engaging with stakeholders and experts, emphasizing that effective regulation requires more than just political posturing.
