On October 28, Texas State Representative Giovanni Capriglione (R-Tarrant County) released a draft of the Texas Responsible AI Governance Act (“TRAIGA”), after nearly a year of collecting input from industry stakeholders. Representative Capriglione, who authored Texas’s Data Privacy and Security Act (discussed here) and currently co-chairs the state’s AI Advisory Council, appears likely to introduce TRAIGA in the upcoming legislative session scheduled to begin on January 14, 2025. Modeled after the Colorado AI Act (SB 205) (discussed here) and the EU AI Act, TRAIGA would establish obligations for developers, deployers, and distributors of “high-risk AI systems.” Additionally, TRAIGA would establish an “AI Regulatory Sandbox Program” for participating AI developers to test AI systems under a statutory exemption.
Although a number of states have expressed significant interest in AI regulation, if TRAIGA is passed, Texas would become the second state to enact industry-agnostic, risk-based AI legislation, following Colorado’s passage of the Colorado AI Act in May. There is significant activity in other states as well, as the California Privacy Protection Agency considers rules that would apply to certain automated decision and AI systems, and other states are expected to introduce AI legislation in the new session. In addition to its requirements for high-risk AI and its AI sandbox program, TRAIGA would amend Texas’s Data Privacy and Security Act to incorporate AI-specific provisions and would provide for an AI workforce grant program and a new “AI Council” to provide advisory opinions and guidance on AI.
Despite these similarities, a number of provisions in the 41-page draft of TRAIGA would differ from the Colorado AI Act:
Lower Thresholds for “High-Risk AI.” Although TRAIGA takes a risk-based approach to regulation by focusing requirements on AI systems that present heightened risks to individuals, the scope of TRAIGA’s high-risk AI systems would arguably be broader than under the Colorado AI Act. First, TRAIGA would apply to systems that are a “contributing factor” in consequential decisions, not only those that constitute a “substantial factor” in such decisions, as contemplated by the Colorado AI Act. Additionally, TRAIGA would define “consequential decision” more broadly than the Colorado AI Act, to include decisions that affect consumers’ access to, cost of, or terms of, for example, transportation services, criminal case assessments, and electricity services.
New Requirements for Distributors and Other Entities. TRAIGA would build upon the Colorado AI Act’s approach to regulating key actors in the AI supply chain by adding a new role for AI “distributors,” defined as persons, other than developers, that make an AI system “available in the market.” Distributors would have a duty to use reasonable care to prevent algorithmic discrimination, including a duty to withdraw, disable, or recall non-compliant high-risk AI systems, as appropriate.
Ban on “Unacceptable Risk” AI Systems. Similar to the EU AI Act, TRAIGA would prohibit the development or deployment of certain AI systems that pose unacceptable risks, including AI systems used to manipulate human behavior, engage in social scoring, capture biometric identifiers of an individual, infer or interpret sensitive personal attributes, infer (or that are capable of inferring) emotions without consent, or produce deepfakes that constitute CSAM or intimate imagery prohibited under Texas law.
New Generative AI Training Data Record-Keeping Requirement. TRAIGA would also impose requirements specific to developers of generative AI systems, who would be required to keep “detailed records” of generative AI training datasets, consistent with suggested actions in NIST’s AI Risk Management Framework Generative AI Profile, previously covered here.
Expanded Reporting for Deployers; No Reporting for Developers. TRAIGA would impose reporting requirements for AI system deployers—defined as persons that “put into effect or commercialize” high-risk AI systems—that go beyond those in the Colorado AI Act. TRAIGA would require deployers to provide written notice to the Texas AG, relevant regulatory authorities, or TRAIGA’s newly established AI Council, as well as “affected consumers,” where the deployer becomes aware or is made aware that a deployed high-risk AI system has caused or is likely to result in algorithmic discrimination or any “inappropriate or discriminatory consequential decision.” Unlike the Colorado AI Act, however, TRAIGA would not impose reporting requirements for developers.
Exemptions. TRAIGA would recognize exemptions for (1) research, training, testing, and other pre-deployment activities within the scope of its sandbox program (unless such activities constitute prohibited uses), (2) small businesses, as defined by the U.S. Small Business Administration, subject to certain other requirements, and (3) developers of open-source AI systems, so long as the developer takes steps to prevent high-risk uses and makes the “weights and technical architecture” of the AI system publicly available.
Enforcement. TRAIGA would authorize the Texas AG to enforce its developer, deployer, and distributor high-risk AI requirements and to seek injunctive relief and civil penalties, subject to a 30-day cure period. Additionally, TRAIGA would provide a limited private right of action for injunctive and declaratory relief against entities that develop or deploy AI for prohibited uses.
* * *
TRAIGA’s prospects for passage are far from certain. As in other states, including Colorado, the draft text may be substantially amended through the legislative process. Nonetheless, if enacted, TRAIGA would firmly establish a risk-based, consumer protection-focused framework as a national model for AI regulation in the United States. We will be closely monitoring TRAIGA and other state AI developments as the 2025 state legislative sessions unfold.
Follow our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs for ongoing updates on key AI and other technology legislative and regulatory developments.