On October 28, Texas State Representative Giovanni Capriglione (R-Tarrant County) released a draft of the Texas Responsible AI Governance Act (“TRAIGA”), after nearly a year of collecting input from industry stakeholders.  Representative Capriglione, who authored Texas’s Data Privacy and Security Act (discussed here) and currently co-chairs the state’s AI Advisory Council, appears likely to introduce TRAIGA in the upcoming legislative session scheduled to begin on January 14, 2025.  Modeled after the Colorado AI Act (SB 205) (discussed here) and the EU AI Act, TRAIGA would establish obligations for developers, deployers, and distributors of “high-risk AI systems.”  Additionally, TRAIGA would establish an “AI Regulatory Sandbox Program” for participating AI developers to test AI systems under a statutory exemption.

Although a number of states have expressed significant interest in AI regulation, if TRAIGA passes, Texas would become only the second state to enact industry-agnostic, risk-based AI legislation, following Colorado’s enactment of the Colorado AI Act in May.  There is significant activity in other states as well: the California Privacy Protection Agency is considering rules that would apply to certain automated decision and AI systems, and other states are expected to introduce AI legislation in the new session.  In addition to its requirements for high-risk AI and its AI sandbox program, TRAIGA would amend Texas’s Data Privacy and Security Act to incorporate AI-specific provisions and would provide for an AI workforce grant program and a new “AI Council” to provide advisory opinions and guidance on AI.

Despite these similarities, however, a number of provisions in the 41-page draft of TRAIGA would differ from the Colorado AI Act:

Lower Thresholds for “High-Risk AI.”  Although TRAIGA takes a risk-based approach to regulation by focusing requirements on AI systems that present heightened risks to individuals, the scope of TRAIGA’s high-risk AI systems would arguably be broader than that of the Colorado AI Act.  First, TRAIGA would apply to systems that are a “contributing factor” in consequential decisions, rather than only to systems that are a “substantial factor” in such decisions, as contemplated by the Colorado AI Act.  Additionally, TRAIGA would define “consequential decision” more broadly than the Colorado AI Act, to include decisions that affect consumers’ access to, cost of, or terms of, for example, transportation services, criminal case assessments, and electricity services.

New Requirements for Distributors and Other Entities.  TRAIGA would build upon the Colorado AI Act’s approach to regulating key actors in the AI supply chain.  It would also add a new role for AI “distributors,” defined as persons, other than developers, that make an AI system “available in the market.”  Distributors would have a duty to use reasonable care to prevent algorithmic discrimination, including a duty to withdraw, disable, or recall non-compliant high-risk AI systems, as appropriate. 

Ban on “Unacceptable Risk” AI Systems.  Similar to the EU AI Act, TRAIGA would prohibit the development or deployment of certain AI systems that pose unacceptable risks, including AI systems used to manipulate human behavior, engage in social scoring, capture biometric identifiers of an individual, infer or interpret sensitive personal attributes, infer (or that have the capability to infer) emotions without consent, or produce deepfakes that constitute CSAM or intimate imagery prohibited under Texas law. 

New Generative AI Training Data Record-Keeping Requirement.  TRAIGA contains requirements specific to developers of generative AI systems, who would be required to keep “detailed records” of generative AI training datasets, following suggested actions in NIST’s AI Risk Management Framework Generative AI Profile, previously covered here.

Expanded Reporting for Deployers; No Reporting for Developers.  TRAIGA would impose reporting requirements for AI system deployers—defined as persons that “put into effect or commercialize” high-risk AI systems—that go beyond those in the Colorado AI Act.  TRAIGA would require deployers to provide written notice to the Texas AG, relevant regulatory authorities, or TRAIGA’s newly established AI Council, as well as to “affected consumers,” where the deployer becomes aware or is made aware that a deployed high-risk AI system has caused or is likely to cause algorithmic discrimination or any “inappropriate or discriminatory consequential decision.”  Unlike the Colorado AI Act, however, TRAIGA would not impose reporting requirements on developers.

Exemptions.  TRAIGA would recognize exemptions for (1) research, training, testing, and other pre-deployment activities within the scope of its sandbox program (unless such activities constitute prohibited uses), (2) small businesses, as defined by the U.S. Small Business Administration, that meet certain other requirements, and (3) developers of open-source AI systems, so long as the developer takes steps to prevent high-risk uses and makes the “weights and technical architecture” of the AI system publicly available.

Enforcement.  TRAIGA would authorize the Texas AG to enforce its high-risk AI requirements for developers, deployers, and distributors, and to obtain injunctive relief and civil penalties, subject to a 30-day cure period.  Additionally, TRAIGA would provide a limited private right of action for injunctive and declaratory relief against entities that develop or deploy AI for prohibited uses.

*              *              *

TRAIGA’s prospects for passage are far from certain.  As in other states, including Colorado, the draft text may be substantially amended through the legislative process.  Nonetheless, if enacted, TRAIGA would firmly establish a risk-based, consumer protection-focused framework as a national model for AI regulation in the United States.  We will be closely monitoring TRAIGA and other state AI developments as the 2025 state legislative sessions unfold.

Follow our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs for ongoing updates on key AI and other technology legislative and regulatory developments.

Matthew Shapanka

Matthew Shapanka practices at the intersection of law, policy, and politics, advising clients on important legislative, regulatory and enforcement matters before Congress, state legislatures, and government agencies that present significant legal, political, and business opportunities and risks.

Drawing on more than 15 years of experience on Capitol Hill, private practice, state government, and political campaigns, Matt develops and executes complex, multifaceted public policy initiatives for clients seeking actions by Congress, state legislatures, and federal and state government agencies. He regularly counsels businesses—especially technology companies—on matters involving intellectual property, national security, and regulation of critical and emerging technologies like artificial intelligence and autonomous vehicles.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters before the Committee, particularly federal election and campaign finance law, Federal Election Commission nominations, and oversight of the legislative branch, including U.S. Capitol security after the January 6, 2021 attack and the rules and procedures governing the Senate. Most significantly, Matt led the Committee’s staff work on the Electoral Count Reform Act – a landmark bipartisan law that updates the procedures for certifying and counting votes in presidential elections – and the Committee’s joint bipartisan investigation (with the Homeland Security Committee) into the security planning and response to the January 6th attack.

Both in Congress and at Covington, Matt has prepared dozens of corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter who has composed dozens of bills and amendments introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas. Matt also leads the firm’s state policy practice, advising clients on complex multistate legislative and regulatory policy matters and managing state advocacy efforts.

In addition to his policy work, Matt advises and represents clients on the full range of political law compliance and enforcement matters involving federal election, campaign finance, lobbying, and government ethics laws, the Securities and Exchange Commission’s “Pay-to-Play” rule, and the election and political laws of states and municipalities across the country.

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA) as a research analyst in the Massachusetts Recovery & Reinvestment Office, where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

Jayne Ponder

Jayne Ponder counsels national and multinational companies across industries on data privacy, cybersecurity, and emerging technologies, including Artificial Intelligence and Internet of Things.

In particular, Jayne advises clients on compliance with federal, state, and global privacy frameworks, and counsels clients on navigating the rapidly evolving legal landscape. Her practice includes partnering with clients on the design of new products and services, drafting and negotiating privacy terms with vendors and third parties, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of Artificial Intelligence and Internet of Things technologies.

Jayne routinely represents clients in privacy and consumer protection enforcement actions brought by the Federal Trade Commission and state attorneys general, including related to data privacy and advertising topics. She also helps clients articulate their perspectives through the rulemaking processes led by state regulators and privacy agencies.

As part of her practice, Jayne advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients on privacy and competition frameworks and AI regulations, with an increasing focus on U.S. state AI legislative developments and trends related to synthetic content, automated decision-making, and generative AI. He also assists clients in assessing federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.