The push to automate government services, championed by Elon Musk’s DOGE initiative, is facing significant challenges. Recent attempts to modernize the Social Security Administration (SSA) using AI-powered tools have highlighted the potential pitfalls of prioritizing automation over human expertise and oversight.
Wired recently reported on the SSA’s rollout of the “Agency Support Companion,” a ChatGPT-style chatbot intended to streamline employee workflows. Early feedback suggests the tool is ineffective and poorly designed: SSA employees have criticized its vague and inaccurate responses, as well as a clumsy training video that omitted any guidance on handling sensitive personal information, an oversight the agency later apologized for. The episode raises serious questions about how prepared the agency is to deploy such automation.
The SSA’s experience echoes similar struggles in other countries. In Brazil, the government’s “Meu INSS” app, which uses computer vision and natural language processing to automate social security claims, frequently rejects legitimate applications over minor errors. These automated rejections often force applicants into protracted legal battles, compounding their hardship. The case of Josélia de Brito, a former sugarcane worker whose retirement claim was denied because of an algorithmic misidentification, underscores how AI-driven systems can exacerbate existing inequalities.
These examples serve as a cautionary tale for the U.S., where DOGE is pursuing an “AI-first” strategy across federal agencies, including the SSA. This approach aims to drastically reduce the federal workforce and replace human workers with software. However, early results have been marked by dysfunction and chaos. One notable incident involved DOGE workers mistakenly marking numerous living Social Security recipients as deceased, leading to benefit disruptions and a complex reinstatement process.
Further concerns arise from DOGE’s ambitious plan to rewrite the SSA’s decades-old codebase, much of it written in COBOL, in a matter of months. Experts suggest that such a rapid overhaul would require heavy reliance on AI coding tools, which are prone to errors and demand careful supervision. Given DOGE’s track record of mistakes, an AI-driven rewrite poses significant risks to the stability and functionality of the SSA’s systems.
Beyond the practical challenges, critics argue that DOGE’s true objective is not to modernize the SSA, but rather to destabilize and ultimately privatize the agency. Whether intentional or not, the current automation efforts appear to be contributing to this outcome.
The SSA’s chatbot debacle and Brazil’s automated claims failures underscore the need for careful design and rigorous testing before AI-powered tools are deployed in critical social services. Pursuing efficiency and automation without adequate safeguards, and without fully understanding how these systems affect the people who depend on them, risks undermining the very programs they are meant to support. The human element remains essential to ensuring fairness, accuracy, and accountability in the delivery of social services.