The rise of sophisticated AI chatbots has sparked concerns about their impact, particularly on children. SB 243, a new bill introduced by California Senator Steve Padilla as part of a broader effort to regulate how chatbots interact with minors, seeks to address these concerns by mandating safeguards for children. Among its key provisions is a requirement that chatbots interacting with children periodically remind them that they are not human.
This provision is particularly relevant given recent events highlighting how vulnerable children are to forming emotional attachments with AI. Last year, a 14-year-old tragically died by suicide after developing a close bond with a chatbot on Character.AI, a platform for creating chatbots based on pop culture figures. The boy’s mother subsequently sued Character.AI, alleging the platform was “unreasonably dangerous” and lacked sufficient safety measures despite being marketed to children.
Research underscores children’s heightened susceptibility to perceiving chatbots as trustworthy, even quasi-human. A study by the University of Cambridge found that children are more likely than adults to trust AI chatbots, which can expose them to significant risks. This trust can be exploited, as demonstrated by researchers who successfully prompted Snapchat’s AI chatbot to provide inappropriate advice to a hypothetical 13-year-old user.
While some argue that chatbots can offer a safe space for children to express their feelings, the potential for isolation and the addictive nature of technology raise concerns. SB 243 also addresses these issues by prohibiting companies from using rewards to increase engagement and requiring them to report instances of suicidal ideation displayed by minors to the State Department of Health Care Services. These interventions aim to disrupt the cycle of addiction often fostered by tech platforms.
However, the bill’s protective measures may not fully address the underlying reasons why children seek solace in chatbots. A lack of real-life support systems contributes to this phenomenon. Overcrowded and underfunded classrooms, declining after-school programs, the disappearance of “third places,” and a shortage of child psychologists leave many children feeling isolated and without adequate resources to address their emotional needs.
In conclusion, while reminding children of the artificial nature of chatbots is a valuable step, addressing the systemic issues that drive children to seek support from AI is crucial. Creating environments where children feel supported and have access to real-life connections is essential for their well-being. Providing these resources may ultimately be more effective than simply reminding them that chatbots aren’t real.