2024 witnessed significant advancements in AI, marked by rapid technological progress, mounting concerns about potential harms, and speculation about who will control the technology's future. Alongside these developments, governments made substantial strides in regulating algorithmic systems. This article surveys the most impactful AI legislation and regulatory efforts of 2024 at the state, federal, and international levels.
State-Level AI Regulation
U.S. states spearheaded AI regulation in 2024, introducing a wide range of bills: some merely established study committees, while others would have imposed civil liability on AI developers for societal harms. Most of these bills did not pass, but several states enacted significant legislation that may serve as models for others.
With the rise of AI-generated misinformation during the election, bipartisan support emerged for anti-deepfake laws. Over 20 states now prohibit deceptive AI-generated political ads close to elections. Legislation targeting AI-generated pornography, especially involving minors, also gained traction in states like Alabama, California, Indiana, North Carolina, and South Dakota.
California, home to the tech industry, proposed ambitious AI regulations, including holding companies liable for catastrophic damages caused by their AI systems. While Governor Newsom vetoed this bill, he signed others addressing more immediate AI harms. These include requiring health insurers to ensure fairness in AI-driven coverage decisions, mandating AI-generated content labeling, and regulating the use of AI-generated likenesses of deceased and living individuals.
Colorado enacted a pioneering law requiring companies to mitigate discriminatory AI practices, setting a potential national benchmark. Utah, notably, prohibited granting legal personhood to AI and other non-human entities.
Federal AI Initiatives
While Congress extensively discussed AI and released a comprehensive report on future regulation, limited federal legislation materialized. However, federal agencies actively pursued goals outlined in President Biden’s 2023 AI executive order. Regulators, particularly the Federal Trade Commission (FTC) and Department of Justice (DOJ), targeted misleading and harmful AI systems.
Agencies focused on hiring AI talent, developing responsible AI standards, and increasing transparency in government AI usage. The Office of Management and Budget spearheaded efforts to disclose information about federal AI systems potentially impacting public rights and safety.
The FTC’s “Operation AI Comply” addressed deceptive AI practices, like fake reviews and unauthorized legal advice. It sanctioned Evolv for misleading claims about its AI gun detection technology, settled an investigation with IntelliVision regarding biased facial recognition, and banned Rite Aid from using facial recognition due to discriminatory practices. The DOJ pursued legal action against RealPage for alleged algorithmic price-fixing and secured antitrust victories against Google, potentially reshaping the AI search landscape.
Global AI Regulations
The European Union’s AI Act came into effect, establishing a model for other regions. It mandates risk mitigation and standards for high-risk AI systems, such as those used in hiring or medical decisions. It also bans specific AI applications, like social scoring systems.
China introduced an AI safety governance framework, providing non-binding standards for risk mitigation.
Brazil’s senate passed a comprehensive AI safety bill, potentially offering significant protections for copyrighted material used in AI training. Developers would be required to disclose training data sources, and creators could prohibit or negotiate compensation for their work’s use. Similar to the EU’s AI Act, the bill mandates safety protocols for high-risk AI systems.
Conclusion
2024 marked a turning point in AI regulation, with governments actively addressing the challenges and opportunities presented by this rapidly evolving technology. From state-level initiatives to federal actions and international frameworks, the year saw concrete steps towards establishing guidelines for responsible AI development and deployment. These developments lay the groundwork for future regulations, emphasizing ethical considerations, transparency, and accountability in the use of AI systems.