Federal Agencies Lack Transparency on High-Risk AI Systems

Federal agencies are increasingly relying on proprietary AI algorithms for critical tasks affecting public safety and civil rights, often without fully understanding how these systems work or what data was used to train them. This concerning trend is revealed in newly released 2024 AI inventory reports.

Agencies like Customs and Border Protection (CBP) and the Transportation Security Administration (TSA) reportedly lack documentation on the data quality used to develop algorithms that screen travelers. Similarly, the Veterans Health Administration (VHA) is acquiring a predictive algorithm for chronic diseases, yet admits uncertainty about the source and acquisition of the veteran medical data used for training. Alarmingly, for over 100 algorithms affecting safety and rights, agencies lack access to the source code explaining their operation.

This reliance on private companies for high-risk AI raises concerns about transparency and accountability. Varoon Mathur, former senior AI advisor to the White House, expressed worry about proprietary systems potentially diminishing agencies’ control over services and risk management. He emphasized the need for collaboration with vendors while acknowledging the challenges in ensuring responsible AI practices when agencies lack insight into the systems’ inner workings.

Past instances of biased federal algorithms, such as the IRS’s racially biased audit selection model and the VA’s suicide prevention algorithm favoring white men, underscore the potential for harm. The 2024 inventories provide unprecedented insight into federal AI usage, including access to documentation, source code, and risk assessments.

Of the 1,757 reported AI systems, 227 are deemed high-risk, impacting civil rights or safety. Over half of these are commercially developed. Disturbingly, for at least 25 high-risk systems, agencies reported no documentation on the training and evaluation data. Furthermore, access to source code is lacking for at least 105 systems.

The Biden administration, through the Office of Management and Budget (OMB), mandated thorough evaluations of high-risk AI and stipulated contractual access to essential model information, including training data documentation or source code. These rules, stricter than those faced by vendors in other sectors, have met resistance from government software vendors advocating for case-by-case evaluations.

Industry groups, including the Software & Information Industry Association and the U.S. Chamber of Commerce, have lobbied against stringent requirements, proposing model scorecards—summarized documents lacking technical details—as an alternative. However, experts like Cari Miller, co-founder of the AI Procurement Lab, consider scorecards insufficient for high-risk algorithms, emphasizing the need for deeper understanding of data sources, representativeness, and potential biases.

As the largest buyer of goods and services in the U.S., the federal government shapes procurement standards, and its approach to AI transparency could set a precedent for other sectors. However, President-elect Trump has indicated his intent to reverse the OMB’s AI rules, potentially jeopardizing these transparency efforts.

Mathur stressed the importance of maintaining momentum in building trust in federal AI, highlighting the significance of access to code, data, and algorithms for understanding and mitigating potential impacts. The future of responsible AI implementation in government hinges on prioritizing transparency and rigorous oversight.
