A state-by-state guide to AI laws in the U.S.

In the absence of federal regulation, U.S. states are proposing and enacting AI laws of their own. This up-to-date state-by-state breakdown can help the C-suite keep tabs on developments.

U.S. states are advancing laws governing artificial intelligence use, given the lack of federal guidance. In doing so, states are creating a regulatory patchwork that could make compliance complex for businesses.

AI bills have been introduced in Congress, but no comprehensive AI regulation has passed. That leaves the other branches of government -- the White House and the courts -- to address the growing issue. Federal agencies have spent the last year implementing President Joe Biden's executive order on AI by developing safety standards for federal AI use. Meanwhile, some AI developers are facing copyright lawsuits, while others are reaching agreements with entities such as news organizations to train AI models on their content.

On the global stage, the European Union this year implemented its AI Act. The law regulates AI by categorizing systems into levels of risk and setting requirements for the different categories. The law will be enforced by EU member states.

TechTarget Editorial will be tracking comprehensive AI laws, or laws specifically targeting AI, as they're enacted in U.S. states, at the federal level and globally.

Northeast U.S.

New Hampshire

  • HB 1688: Prohibits state agencies from using AI to surveil or manipulate members of the public.
    Effective date: July 1, 2024.

Southeast U.S.

Tennessee

  • ELVIS Act (HB 2091): Updates Tennessee's Personal Rights Protection Act to protect individuals, including musicians, from unauthorized AI-generated simulations of their voice or likeness.
    Effective date: July 1, 2024.

Midwest U.S.

Illinois

  • HB 3773: Amends the Illinois Human Rights Act to regulate employers' use of AI. It prohibits employers from using AI that could subject employees to unlawful discrimination based on the classes the Illinois Human Rights Act protects. It also prohibits employers from using ZIP codes as a proxy for protected classes in recruitment and hiring.
    Effective date: Jan. 1, 2026.

Southwest U.S.

Utah

  • S.B. 149 Artificial Intelligence Policy Act: Creates liability for undisclosed AI use that violates consumer protection laws. It requires those in regulated occupations, such as healthcare providers, to disclose when consumers are interacting with generative AI. It also establishes the Office of Artificial Intelligence Policy and a regulatory AI analysis program.
    Effective date: May 1, 2024.

West U.S.

California

  • SB-942 California AI Transparency Act: Requires companies developing generative AI systems to provide free AI detection tools and to let users mark content as AI-generated. It also allows developers to revoke third-party licenses if licensees modify the generative AI system so that it no longer includes the required AI disclosures. The law sets penalties of $5,000 per violation. (A minimal sketch of the disclosure-and-detection idea appears after this list.)
    Effective date: Jan. 1, 2026.
  • AB 2013: Requires large AI system developers to publicly disclose a high-level summary of the data used to train generative AI.
    Effective date: Jan. 1, 2026.
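
To make the disclosure-and-detection idea concrete, here is a minimal sketch of how generated content might carry a verifiable AI-generated marker. This is an illustration only: the manifest format and the function names are invented, and nothing here reflects SB-942's actual technical requirements.

```python
# Hypothetical sketch: a provenance "manifest" plus a matching detection
# check. The format and names are invented for illustration; SB-942 does
# not prescribe this mechanism.
import hashlib
import json

def make_manifest(content: bytes, provider: str) -> str:
    """Attach a disclosure record to a piece of generated content."""
    return json.dumps({
        "ai_generated": True,
        "provider": provider,
        "sha256": hashlib.sha256(content).hexdigest(),
    })

def detect(content: bytes, manifest: str) -> bool:
    """True if the disclosure is intact and refers to this exact content."""
    record = json.loads(manifest)
    return (record.get("ai_generated") is True
            and record.get("sha256") == hashlib.sha256(content).hexdigest())

# Usage with stub content.
output = b"A generated paragraph."
manifest = make_manifest(output, provider="ExampleAI")
assert detect(output, manifest)                # disclosure intact
assert not detect(b"Edited text.", manifest)   # edited content no longer matches
```

Hashing the content ties the disclosure to one exact output, so an edit that strips or invalidates the marker is detectable.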

Colorado

  • SB24-205: Requires developers of high-risk AI systems to use what it describes as "reasonable care" to protect consumers from algorithmic discrimination. Among the law's requirements, developers must disclose information about the system to deployers and publish statements summarizing the high-risk systems they offer.
    Effective date: Feb. 1, 2026.

Around the world

  • China AI regulations: China became one of the first countries to regulate AI. Its regulations require businesses to be transparent about their use of AI algorithms and to provide explainable algorithms. They also prohibit algorithms from offering different prices to different users based on the personal data the algorithm collects and assesses.
    Effective date: March 1, 2022.
  • EU Artificial Intelligence Act: Categorizes AI systems into four levels of risk. It prohibits systems in the unacceptable-risk category, which includes uses such as social scoring systems and compiling facial recognition databases through untargeted internet scraping. High-risk systems, such as those that make decisions affecting individuals -- for example, credit, housing and employment decisions -- face the most oversight. Limited-risk AI systems, such as chatbots, face lighter transparency requirements. Minimal-risk systems, which include AI-enabled video games, are unregulated. The law's requirements take effect in phases, applying to businesses gradually over time.
    Effective date: Aug. 1, 2024.

AI regulatory trends

Transparency. In both enacted and proposed AI laws, policymakers want AI developers and deployers to make clear when a user is interacting with AI, whether through a technology like a chatbot or through AI-generated content. A minimal sketch of one way a deployer might meet such a disclosure requirement follows.
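
This sketch is illustrative only: the wrapper class, the disclosure text and the choice to disclose once per conversation are assumptions, not requirements drawn from any of the laws above.

```python
# Hypothetical illustration of a chatbot wrapper that prepends an
# AI-use disclosure; nothing here is mandated by a specific statute.

AI_DISCLOSURE = "You are interacting with an automated AI system."

class DisclosingChatbot:
    """Wraps any text-generation callable and discloses AI use up front."""

    def __init__(self, generate):
        # `generate` is any function mapping a user prompt to model text.
        self.generate = generate
        self.disclosed = False

    def reply(self, prompt: str) -> str:
        text = self.generate(prompt)
        if not self.disclosed:
            # Disclose once, at the start of the conversation.
            self.disclosed = True
            return f"{AI_DISCLOSURE}\n\n{text}"
        return text

# Usage with a stub model in place of a real one.
bot = DisclosingChatbot(lambda p: f"(model answer to: {p})")
print(bot.reply("What are your store hours?"))
```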

Terms to know

AI bias: Machine learning bias, or AI bias, occurs when an algorithm produces prejudiced results because of the data it was fed during training.

AI ethics: An artificial intelligence code of ethics is a policy that defines the role of AI and acts as guidance when developers are faced with an ethical decision regarding the use of the technology.

Artificial intelligence governance: AI governance is a legal framework designed to ensure AI technologies are developed and used in ethical and responsible ways.

Black box AI: These systems are not transparent, making it difficult to understand or explain how the AI model arrived at its conclusions. Black box AI models might create problems related to bias -- where incorrect results offend or damage some groups of people -- and with validating accuracy.

Copyright: Copyright is legal ownership of the rights to use and distribute certain creative works, including books, music, videos and visual arts. Generative AI providers, such as OpenAI and Anthropic, have been sued for training their systems on potentially copyright-protected works, such as newspapers, books and works of art.

Deepfake: AI can be used to create deepfake audio or video, which replaces one person with another to create new content where someone -- such as a prominent government official -- is represented doing or saying something they didn't do or say.

GDPR: The EU's General Data Protection Regulation (GDPR) aims to protect individuals' privacy by requiring organizations that collect personal data to do so responsibly. The GDPR applies to AI when personal data is used in model training or deployment.

Responsible AI: An approach to developing and deploying AI systems that accounts for ethical and legal considerations, helping organizations use AI in a safe and trustworthy way.

Makenzie Holland is a senior news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general assignment reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.

Bridget Botelho, TechTarget's editorial director of news, contributed to this report.