As 2024 draws to a close, artificial intelligence continues to transform industries globally, demanding a regulatory framework that evolves alongside its rapid advancement. The United States stands at a pivotal crossroads, having assembled a patchwork of federal and state rules designed to balance innovation with ethical oversight. As AI technologies become increasingly embedded in daily life, the need for adaptive, forward-thinking governance grows more pressing, setting the stage for significant proposals in 2025.
Looking toward 2025, several major themes are expected to shape AI regulation. Enhanced ethical oversight and transparency will be at the forefront, requiring AI systems to be explainable and understandable. Human-in-the-loop systems will gain prominence, especially in sectors where AI impacts human lives, ensuring that human judgment remains integral to decision-making processes. Data privacy and security will see intensified focus, with stricter standards for data protection and cybersecurity.
Bias mitigation and fairness will be critical, with regulations targeting discrimination prevention in AI outcomes across various sectors. Accountability and liability frameworks will be clarified, assigning responsibilities for AI-driven actions. Environmental impacts of AI will be scrutinized, prompting measures to mitigate the carbon footprint of AI technologies.
United States Federal Regulations and Proposals
The current regulatory landscape is supported by key federal regulations such as the Health Insurance Portability and Accountability Act (HIPAA) and the Federal Food, Drug, and Cosmetic Act (FDCA). These laws establish rigorous standards for privacy and safety in healthcare-related AI applications. They are complemented by the Federal Trade Commission Act, which extends consumer protection into the digital arena, ensuring that AI applications follow fair trade practices. Additionally, the 21st Century Cures Act facilitates the integration of AI into healthcare decision-making processes by offering exemptions for clinical decision support software, maintaining necessary safeguards while promoting innovation.
Federal Legislation Proposals
- Better Mental Health Care for Americans Act (S293): Modifies Medicare, Medicaid, and the Children’s Health Insurance Program to account for AI’s role in mental health treatment. Requires documentation of AI’s use in nonquantitative treatment limitations and mandates transparency in AI-driven decisions. Status: Proposed and under consideration.
- Health Technology Act of 2023 (H.R.206): Proposes allowing AI technologies to qualify as prescribing practitioners if authorized by state law and compliant with federal device standards, aiming to integrate AI into healthcare prescribing practices. Status: Proposed and under consideration.
- Pandemic and All-Hazards Preparedness and Response Act (S2333): Mandates a study on AI’s potential threats to health security, including misuse in contexts such as chemical and biological threats, with a report to Congress on mitigating risks. Status: Proposed and under consideration.
- Algorithmic Accountability Act (AAA): Requires businesses using automated decision systems to report their impact on consumers. Status: Proposed.
- Federal Artificial Intelligence Risk Management Act: Would make the NIST AI Risk Management Framework mandatory for federal agencies. Status: Proposed.
- TEST AI Act of 2023: Focuses on advancing trustworthy AI tools. Status: Proposed.
- Artificial Intelligence Environmental Impacts Act of 2024: Directs an assessment of AI’s environmental impacts. Status: Proposed.
- Stop Spying Bosses Act: Addresses AI use in workplace surveillance. Status: Proposed.
- No Robot Bosses Act: Regulates AI use in employment decisions. Status: Proposed.
- No AI Fraud Act: Protects individuals’ likenesses from AI abuse. Status: Proposed.
- Preventing Deep Fake Scams Act: Addresses AI-related fraud in financial services. Status: Proposed.
State-Level Legislation and Proposals
A variety of innovative state legislation addresses diverse regional needs. For instance, California’s AI Transparency Act mandates disclosure and enhances public awareness of AI-generated content. It builds on the California Consumer Privacy Act (CCPA), landmark legislation enacted in 2018 that gives California residents enhanced privacy rights and consumer protections over businesses’ collection and use of their personal data. Illinois has strengthened its Human Rights Act to prevent AI-driven discrimination in the workplace, while states such as Massachusetts and Rhode Island focus on ethical AI integration in mental health and diagnostic imaging services. Colorado has also made strides with legislation such as SB24-205, which requires developers of high-risk AI systems to use “reasonable care” to prevent algorithmic discrimination and mandates public disclosures, effective February 1, 2026.
The following legislative efforts underscore the evolving regulatory landscape, aiming to harmonize technological advancement with ethical responsibility, setting the stage for significant regulatory proposals and changes in 2025:
- Northeast U.S.
- New Hampshire (HB 1688): Prohibits state agencies from using AI to surveil or manipulate the public, protecting citizens’ privacy and autonomy. Effective Date: July 1, 2024.
- Massachusetts (An Act Regulating AI in Mental Health Services H1974): Requires mental health professionals to obtain board approval for using AI in treatment, emphasizing patient safety and informed consent. Status: Proposed and pending approval.
- Rhode Island (House Bill 8073): Proposes mandatory coverage for AI technology used in breast tissue diagnostic imaging, with independent physician review. Status: Pending.
- Southeast U.S.
- Tennessee (HB 2091 ELVIS Act): Targets AI-generated deepfakes by prohibiting unauthorized use of AI to mimic a person’s voice, addressing privacy concerns and protecting individuals from identity theft and impersonation. Effective Date: July 1, 2024.
- Virginia (HB2154): Requires healthcare facilities to establish and implement policies on the use of intelligent personal assistants, ensuring responsible integration into patient care and protecting patient confidentiality. Status: In effect since March 18, 2021.
- Georgia (HB887): Prohibits healthcare and insurance decisions based solely on AI, requiring human review of AI-driven decisions to ensure they can be overridden if necessary. Status: Proposed and pending approval.
- Midwest U.S.
- Illinois (HB 3773): Amends the Illinois Human Rights Act to regulate AI use by employers, prohibiting AI applications that could lead to discrimination based on protected classes.
- Illinois (SB2795, Safe Patients Limit Act): Limits AI’s role in healthcare decision-making, ensuring registered nurses’ clinical judgments are not overridden by AI algorithms and emphasizing human oversight. Status: Reintroduced in 2024 and pending approval.
- Southwest U.S.
- Utah (SB 149): Establishes liability for undisclosed AI use that violates consumer protection laws. Mandates disclosure when consumers interact with generative AI and establishes the Office of Artificial Intelligence Policy to oversee AI applications in regulated sectors like healthcare. Effective Date: May 1, 2024.
- West U.S.
- California:
- SB-942 California AI Transparency Act: Requires developers of generative AI to provide AI detection tools and allows revocation of licenses if disclosures are removed. Effective Date: January 1, 2026.
- AB 2013: Obligates large AI developers to disclose data summaries used for training generative AI, fostering transparency. Effective Date: January 1, 2026.
- Assembly Bill 3030: Requires healthcare facilities using generative AI for patient communication to disclose AI involvement and provide human contact options.
- Senate Bill 1120: Mandates that medical necessity decisions be made by licensed providers and requires AI tools in utilization management to comply with fair standards.
- Senate Bill 896 (SB-896): Directs the California Office of Emergency Services to evaluate the risks of generative AI, coordinating with AI companies to mitigate public safety threats.
- Assembly Bill 1008 (AB-1008): Extends privacy laws to generative AI systems, ensuring compliance with data use restrictions.
- Assembly Bill 2885 (AB-2885): Establishes a legal definition for artificial intelligence in California law.
- Assembly Bill 2876 (AB-2876): Requires AI literacy considerations in education curriculums.
- Senate Bill 1288 (SB-1288): Tasks superintendents with evaluating AI use in education.
- Assembly Bill 2905 (AB-2905): Mandates AI-generated voice disclosures in robocalls.
- Assembly Bill 1831 (AB-1831): Expands child pornography laws to include AI-generated content.
- Senate Bill 926 (SB-926): Criminalizes AI-generated nude image blackmail.
- Senate Bill 981 (SB-981): Requires social media to facilitate reporting of AI-generated deepfake nudes.
- Assembly Bill 2655 (AB-2655): Mandates labeling or removal of election-related AI deepfakes.
- Assembly Bill 2839 (AB-2839): Holds social media users accountable for election-related AI deepfakes.
- Assembly Bill 2355 (AB-2355): Requires political ads created with AI to include clear disclosures.
- Assembly Bill 2602 (AB-2602): Requires studios to obtain consent before creating AI-generated replicas of actors.
- Assembly Bill 1836 (AB-1836): Extends consent requirements to estates of deceased performers for AI-generated replicas.
- Colorado:
- SB24-205: Requires developers of high-risk AI systems to use “reasonable care” to prevent algorithmic discrimination and mandates public disclosures. Effective Date: February 1, 2026.
- Other U.S.
- West Virginia (House Bill 5690): Establishes a task force to recommend AI regulations that protect individual rights and data privacy, with implications for healthcare settings where sensitive patient data is involved. Status: Enacted.
Key Global Regulations
China AI Regulations: Mandates transparency and prohibits discriminatory pricing in AI, requiring clear algorithm explanations. Effective Date: March 1, 2022.
European Union AI Act: Categorizes AI systems by risk, imposes oversight on high-risk applications, and bans unacceptable-risk systems. Effective Date: August 1, 2024.
International alignment and standards will guide the harmonization of national regulations with global AI governance practices. The influence of the European Union’s AI Act and China’s stringent AI policies continues to shape U.S. strategies, underscoring the need for international alignment in AI governance. The World Health Organization (WHO) has issued guidelines for integrating large multi-modal models in healthcare, emphasizing ethical considerations and governance that align with international standards. Additionally, there will be specific attention to AI’s role in employment, workplace surveillance, and healthcare, ensuring ethical use and protecting individual rights. These frameworks underscore transparency, accountability, and fairness, setting benchmarks that U.S. regulations aim to meet or exceed.
Key Themes Shaping the Future of AI Regulation
Enhanced Ethical Oversight and Transparency: As AI systems become more integrated into critical decision-making processes, there will be a stronger emphasis on ethical oversight. This includes requiring transparency in AI algorithms, ensuring that decisions made by AI systems are explainable and understandable to users and regulators alike.
Human-in-the-Loop Systems: There will be increased implementation of human-in-the-loop systems, particularly in sectors where AI decisions can significantly impact human lives, such as healthcare, finance, and criminal justice. This approach ensures that human judgment and ethical considerations are factored into AI-driven decisions.
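In practice, human-in-the-loop oversight is often implemented as a gate that routes high-stakes or low-confidence AI decisions to a human reviewer instead of acting automatically. The sketch below illustrates the pattern; all names, the confidence threshold, and the reviewer logic are illustrative assumptions, not a reference to any specific regulated system.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    outcome: str       # the AI system's proposed action
    confidence: float  # model confidence in [0, 1]
    high_stakes: bool  # e.g., a denial of care, credit, or employment

def human_in_the_loop(decision: Decision,
                      review: Callable[[Decision], str],
                      threshold: float = 0.9) -> str:
    """Route the decision to a human reviewer unless the model is
    confident AND the decision is low-stakes."""
    if decision.high_stakes or decision.confidence < threshold:
        return review(decision)   # human judgment is final
    return decision.outcome      # automated path, still auditable

# Hypothetical reviewer who overrides automated denials on appeal
reviewer = lambda d: "approve" if d.outcome == "deny" else d.outcome
print(human_in_the_loop(Decision("deny", 0.95, high_stakes=True), reviewer))
# prints "approve": high-stakes decisions always reach the reviewer
```

The key design choice is that the gate is mandatory for high-stakes outcomes regardless of confidence, which mirrors the human-review requirements in proposals like Georgia’s HB887.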
Data Privacy and Security: Strengthening data privacy and security measures will continue to be a priority. Regulations will likely mandate stricter data protection standards, including minimizing data collection, ensuring data anonymization, and enhancing cybersecurity measures to protect against breaches and misuse.
Bias Mitigation and Fairness: Addressing and mitigating biases in AI systems will remain a central theme. Regulatory frameworks will focus on ensuring fairness in AI outcomes, particularly in areas like employment, lending, and law enforcement, where biased algorithms can lead to discrimination.
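Fairness requirements like these are typically screened with quantitative metrics. One common starting point is the demographic parity gap: the difference in positive-outcome rates across groups. The sketch below is a minimal illustration; the data and the 0.2 flagging threshold are assumptions for demonstration, not a regulatory standard.

```python
from collections import defaultdict

def demographic_parity_gap(outcomes):
    """outcomes: list of (group, approved) pairs.
    Returns the max difference in approval rates across groups."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in outcomes:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Illustrative lending decisions: (applicant group, loan approved?)
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
gap = demographic_parity_gap(decisions)
print(f"approval-rate gap: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
if gap > 0.2:  # illustrative threshold, an assumption here
    print("flag model for bias review")
```

A gap alone does not prove discrimination, but automated checks like this give compliance teams an auditable signal to trigger the human review that regulations increasingly expect.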
Accountability and Liability: As AI systems gain more autonomy, assigning accountability and liability for AI-driven actions becomes crucial. Regulations may define clear responsibilities for developers, operators, and users of AI systems to ensure accountability for outcomes.
Environmental Impact: With growing awareness of environmental sustainability, there may be increased focus on assessing and mitigating the environmental impact of AI technologies. This includes energy consumption and the carbon footprint associated with training and deploying large AI models.
International Alignment and Standards: As AI is a global phenomenon, there will be efforts to align national regulations with international standards to facilitate cross-border cooperation and ensure consistency in AI governance globally.
AI in Employment and Workplace Surveillance: Regulations may address the use of AI in employment decisions and workplace surveillance to protect workers’ rights and prevent invasive monitoring practices.
AI in Healthcare: There will likely be specific guidelines on using AI in healthcare to ensure patient safety, informed consent, and the ethical use of AI in diagnostics and treatment planning.
Strategies to Work Within the Framework of Regulations
To navigate this complex regulatory landscape effectively, organizations should:
Establish Clear Governance and Policies: Create governance frameworks and maintain compliance documentation.
Understand Regulatory Requirements: Conduct thorough research and adopt compliance frameworks (e.g., ISO 42001) to manage AI risks.
Incorporate Privacy by Design: Use data minimization, anonymization, and encryption to align with legal standards.
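Privacy by design can start with two simple habits: collect only the fields you need, and pseudonymize direct identifiers before storage. Below is a minimal sketch assuming a salted-hash pseudonymization scheme; the field names and record layout are hypothetical.

```python
import hashlib
import os

# Data minimization: only these fields may leave the intake system
ALLOWED_FIELDS = {"age_range", "state", "visit_reason"}

def pseudonymize(value: str, salt: bytes) -> str:
    """Replace a direct identifier with a salted SHA-256 digest.
    Keeping the salt separate from the data store hinders re-identification."""
    return hashlib.sha256(salt + value.encode()).hexdigest()[:16]

def minimize_record(record: dict, salt: bytes) -> dict:
    """Keep only approved fields; swap the patient ID for a pseudonym."""
    clean = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    clean["patient_ref"] = pseudonymize(record["patient_id"], salt)
    return clean

salt = os.urandom(16)  # in production, manage the salt as a secret
raw = {"patient_id": "P-1043", "name": "Jane Doe",
       "age_range": "30-39", "state": "CA", "visit_reason": "screening"}
print(minimize_record(raw, salt))  # name dropped, ID replaced by digest
```

Note that salted hashing is pseudonymization, not anonymization: under laws like HIPAA and the CCPA, pseudonymized records generally remain regulated personal data, so access controls and encryption are still required.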
Enhance Security Measures: Implement robust security protocols and continuous monitoring.
Focus on Ethical AI Development: Mitigate biases and ensure transparency and accountability.
Implement Rigorous Testing and Validation: Use regulatory sandboxes and performance audits. AI sandboxes, such as the National Institute of Standards and Technology (NIST) AI sandbox initiative, provide a controlled environment for testing AI technologies across sectors.
Engage Stakeholders and Experts: Form cross-disciplinary teams and consult stakeholders.
Continuous Education and Adaptation: Keep teams updated on regulatory changes.
Conclusion
As the regulatory landscape evolves, 2025 promises to be a transformative year, with proposals that seek to refine and enhance AI governance. This overview has explored the current state of AI regulation in the U.S., the proposals poised to reshape it, and the implications for the future of AI technology as we strive to harmonize innovation with ethical responsibility. An emerging trend among companies is the adoption of comprehensive AI governance frameworks that mirror the European Union’s efforts to protect human rights through fair and ethical AI practices. By embedding human-in-the-loop systems, especially in critical decision-making areas involving human lives, organizations not only bolster ethical oversight but also shield themselves from potential liabilities. This integration underscores a commitment to responsible AI development, aligning technological advances with global standards of transparency and accountability.