You may have heard the news: in its last legislative session, Texas enacted new legislation that staked out an early position in the ever-evolving landscape of artificial intelligence regulation. That new law—the Texas Responsible AI Governance Act (TRAIGA)—was signed earlier this year, takes effect next month on January 1, 2026, and creates a comprehensive regulatory framework for the development and deployment of artificial intelligence that imposes meaningful duties on both government agencies and private entities. With TRAIGA’s effective date rapidly approaching, companies must understand their new compliance obligations.
This article primarily focuses on TRAIGA’s effect on companies, starting with a short summary of TRAIGA’s key obligations and regulations and identifying potential actions you might consider taking before January 1. It then provides a more detailed discussion of the same provisions (including TRAIGA’s key obligations, potential penalties for violations, and regulatory sandbox and safe harbor provisions), and then ends with key takeaways and a look toward the future.
Summary of key points to know about TRAIGA
- TRAIGA establishes new artificial intelligence (AI) governance rules for companies operating in Texas and preempts any local AI regulations.
- TRAIGA’s definition of AI systems is broad—it includes more than just generative AI systems (for example, chatbots)—and therefore will likely apply to a large segment of companies operating in Texas.
- Under TRAIGA, government agencies have a duty to disclose to consumers that they will be interacting with an AI system (but they are not required to obtain affirmative consent from the consumer to continue the interaction).
- TRAIGA also creates a duty for healthcare providers to disclose to patients that AI will be used in relation to the patient’s service or treatment.
- TRAIGA does not create a private cause of action to remedy violations; instead, TRAIGA is enforceable by the Texas Attorney General and can result in civil penalties ranging from US$10,000 to US$200,000 per violation (which can accrue on a continuing, daily basis).
- TRAIGA creates a regulatory sandbox for companies to develop and test AI systems without the need to obtain licenses or other regulatory authorization in exchange for sharing information with the State.
- And separate from its regulatory sandbox, TRAIGA also provides a liability safe harbor for companies that discover potential TRAIGA violations through their own efforts.
Summary of what you should consider doing before January 1, 2026
- Implement and maintain written AI policies that clearly express your company’s intent, purpose and aim for developing or deploying AI systems and guide your company’s use of the same.
- Establish an internal review process that seeks internal and external feedback from employees, developers, users and other stakeholders regarding your company’s development and deployment of AI.
- Create internal processes that will track relevant information from your AI systems (including inputs and outputs), test your AI systems and perform internal reviews on the same.
- Decide whether to participate in TRAIGA’s regulatory sandbox program.
- Identify potential TRAIGA violations.
TRAIGA applies broadly
TRAIGA applies to anyone who promotes, advertises or conducts business in Texas, produces a product or service used by Texas residents, or develops or deploys an artificial intelligence system in Texas. Tex. Bus. & Comm. Code § 551.002(1)-(3). A "deployer" of AI means a person who deploys an artificial intelligence system for use in Texas, while a “developer" means a person who develops an artificial intelligence system that is offered, sold, leased, given or otherwise provided in Texas. Tex. Bus. & Comm. Code § 552.001(1)-(2).
Further, TRAIGA defines AI systems as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions or recommendations, that can influence physical or virtual environments.” Tex. Bus. & Comm. Code § 551.001(1).
This definition covers more than just the generative AI systems and the related AI companies that dominate the news; it also includes predictive and conversational AI, and likely even systems that wouldn’t necessarily be given the tag of “AI” by technology industry experts or the everyday user. At a practical level, this definition likely includes virtual assistants, grammar check functions, recommendation algorithms, autonomous driving, facial recognition systems, business automation and similar systems.
Given the breadth of this definition, TRAIGA’s requirements are therefore applicable to more than just “AI” companies or companies that use generative AI tools. Additionally, given the proliferation of AI systems—particularly narrow AI that is built for specific tasks—even companies that might not have to ensure compliance with TRAIGA on January 1 may be subject to it in the near future.
TRAIGA requires government agencies to disclose when consumers interact with AI systems
The TRAIGA requirement that will be the most noticeable to consumers is also its most general; if a governmental agency has an artificial intelligence system that is intended to interact with consumers, at all, that governmental agency is required to disclose that fact to the consumer “before or at the time of the interaction,” even if it would be obvious to the consumer that they are interacting with an AI system. See Tex. Bus. & Comm. Code § 552.051(b)-(c). Under TRAIGA, this obligation extends to every governmental agency with the few exceptions of hospital districts and institutions of higher education. Tex. Bus. & Comm. Code § 552.001(1)-(2).
Because TRAIGA’s definition of AI is so broad to begin with, and because AI systems are proliferating, AI use cases are multiplying, and the cost of AI continues to decline, in the near future this duty will likely apply to almost every interaction between a governmental agency and a consumer, and therefore almost every governmental agency will need to draft and implement an AI disclosure for its consumers. These disclosures must be clear, conspicuous, written in plain English and free of “dark patterns” that could undermine user autonomy. Tex. Bus. & Comm. Code § 552.051(d).
A “dark pattern” is a user interface designed or manipulated with the effect of substantially subverting or impairing user autonomy, decision making or choice, and includes any practice the Federal Trade Commission refers to as a dark pattern. Tex. Bus. & Comm. Code § 541.001(10). Put another way, a dark pattern is a user interface designed to manipulate or mislead a user into doing something they might not otherwise have done.
A governmental agency must draft and implement their AI disclosure in a way that does not do that; the disclosure should not attempt to coax or cajole the consumer into continuing to interact with the AI system after the disclosure when the consumer might otherwise choose not to, and it should not obscure or hide that an interaction with an AI system is about to occur or what the interaction is. However, TRAIGA does not require obtaining the consumer’s consent to interact with the AI system; the governmental agency must simply make a clear, conspicuous disclosure that an interaction is with an AI system.
TRAIGA also requires companies providing healthcare services to make a disclosure to patients
Along with its governmental agency AI disclosure requirement, TRAIGA also imposes a slightly different disclosure requirement on providers of healthcare services (services defined as “related to human health or to the diagnosis, prevention or treatment of a human disease or impairment provided by an individual licensed, registered, or certified under applicable state or federal law to provide those services”). While TRAIGA’s current general disclosure requirement is directed at government agencies, this second disclosure requirement will affect private entities that fall under the statute’s definition of providers of healthcare services. Providers meeting that definition are required to disclose to a patient (or their representative) that an AI system is being used “in relation to the healthcare service or treatment.” Tex. Bus. & Comm. Code § 552.051(a)-(f).
Relevantly, TRAIGA requires providers to disclose when an AI system “is” actually used in relation to a patient’s treatment. It may therefore be insufficient for a provider to simply make a disclosure to every patient at the beginning of every service that an AI system “may” be used in relation to their future treatment. But while that may impose a higher burden on providers, the statute’s legislative history indicates that it may be sufficient for the provider’s “disclosure to be provided . . . as part of any waivers or forms signed by a patient at the start of the service.” See Tex. B. An., H.B. 149 (2025).
TRAIGA does, however, allow providers additional time to make this disclosure; unlike the governmental agency disclosure, which must be made “before or at the time of interaction,” providers of healthcare services are only required to provide the disclosure on “the date” the treatment is provided, which allows the possibility for the disclosure to be made shortly after treatment is provided as well. Further, TRAIGA also carves out emergencies from this requirement, allowing providers to instead make the disclosure “as soon as reasonably possible” in case of emergency treatment.
As above, with the proliferation of AI—even and especially in backend systems and other systems “used in relation to treatment” (for example, to identify potential abnormalities in imaging)—most, if not all, healthcare providers will eventually need to make a TRAIGA disclosure to patients.
TRAIGA prohibits discrimination by AI systems
TRAIGA also prohibits companies that (1) promote, advertise or conduct business in Texas, (2) serve Texas residents and (3) AI developers/deployers in Texas from intentionally developing or deploying an AI system that unlawfully discriminates against a protected class or infringes on any right guaranteed under the United States Constitution, Texas Constitution, or state or federal law. Tex. Bus. & Comm. Code §§ 552.002; 552.056(b).
Crucially, liability under TRAIGA is contingent on whether a party intended to discriminate; evidence of “disparate impact” is not enough to demonstrate such an intent. Tex. Bus. & Comm. Code § 552.056(c). In other words, if an AI system ends up having a negative effect on a protected group, that fact alone does not constitute a violation of TRAIGA (although it could be used as evidence of a violation).
At first glance, this section may not seem to apply to the average company acting in good faith. But it is still worth considering because TRAIGA puts a continuing burden on companies developing and deploying AI systems. The requirement that a violation of this section be intentional might shield the average company from liability, but TRAIGA’s breadth—and TRAIGA’s instruction that it should be read broadly to accomplish its goal—should give companies pause and invite them to consider implementing policies and procedures to ensure that any discrimination caused by their development or deployment of AI (whether intentional or unintentional) is quickly identified. At the same time, companies should also consider devising a plan to respond to any discovered discrimination so that it can be quickly cured.
Like TRAIGA’s disclosure requirement, this section applies to almost any company developing or deploying AI systems; TRAIGA only recognizes limited exceptions for (1) insurers (subject to applicable statutes regulating unfair discrimination, unfair methods of competition, or unfair or deceptive acts or practices related to the business of insurance) and (2) federally insured financial institutions that comply with applicable banking laws and regulations. Tex. Bus. & Comm. Code § 552.056(d)-(e).
TRAIGA prohibits AI systems that incite or encourage harm
TRAIGA also prohibits developing or deploying AI systems that manipulate behavior to incite self-harm, harm to others or criminal activity. Tex. Bus. & Comm. Code § 552.052. As above, the Act is broadly construed to effectuate its purpose of facilitating the responsible development and use of AI systems and protecting individuals. Tex. Bus. & Comm. Code § 551.003. Accordingly, companies should consider that “harm” under this section may be interpreted liberally, particularly since “harm” is a separate sub-section from criminal activity, which could indicate that it includes harm in the civil context. In other words, TRAIGA’s prohibition on “harm” could be quite broad and apply to many different “harmful” outcomes (for example, monetary harm).
Like TRAIGA’s prohibition on discrimination, TRAIGA’s prohibition of inciting or encouraging harm is limited by intentionality. How to define a company’s “aim” for the development or deployment of an AI system is still up for debate, and the manner of the development or deployment will also be subject to scrutiny. Accordingly, companies should protect themselves from potential liability under this section by drafting clear policies and procedures establishing the manner and aim of the development and deployment of their AI systems and creating internal controls that ensure that AI use does not stray from those guidelines.
TRAIGA’s enforcement, opportunity to cure and penalties
TRAIGA does not open the door to private lawsuits; the public may lodge complaints against alleged violators through a forthcoming web portal, but only the Texas Attorney General may enforce TRAIGA. Tex. Bus. & Comm. Code §§ 552.101-552.102. However, the lack of a private right of action does not mean litigation risks under TRAIGA should be ignored or downplayed.
A potential violator cannot be immediately sued under TRAIGA; it must first receive a notice of violation from the Texas Attorney General. Tex. Bus. & Comm. Code § 552.104(a). Upon receiving that notice, the party has 60 days to cure the alleged violations, explain how the violation was cured and identify any changes made to internal policies to prevent further violations. Tex. Bus. & Comm. Code § 552.104(b). If the party does cure, it may avoid further prosecution and penalties.
If the potential violator fails to cure the alleged violation after the 60 days though, they may be liable for the following civil penalties:
- US$10,000-US$12,000 for each TRAIGA violation the court determines to be curable
- US$10,000-US$12,000 for each breach of a statement made during the 60-day cure period regarding how the violation was addressed and what changes were implemented to prevent future violations
- US$80,000-US$200,000 for each uncurable TRAIGA violation
- US$2,000-US$40,000 for each day a TRAIGA violation continues
Companies should be aware of two significant concerns arising from these penalty provisions.
First, given the complexity of AI systems, 60 days may be insufficient time to “cure” a violation of TRAIGA, particularly because a “cure” might mean that the party must substantially modify an AI system, particularly in a situation where the party is not the developer of the system. If this 60-day window is not sufficient time to alter an AI system’s programming or functionality, then a notice of violation will effectively function as a cease and desist order.
Second, TRAIGA currently does not provide clear guidance distinguishing “curable” violations from “uncurable” violations. Because AI systems are code and algorithms, a party may attempt to suggest that, given sufficient resources and time, any part of an AI system could be changed (and thus any problems “cured”), which would render the TRAIGA’s distinction of “uncurable” violations a nullity. But courts are unlikely to read that sub-section as superfluous. At this time, without the benefit of applicable TRAIGA precedent, it is unclear whether the inclusion of “uncurable” violations is an attempt to distinguish between the severity of violations, the time needed to cure or some other factor. What qualifies as an “uncurable” violation will thus be developed by experts, the courts and the resulting common law Companies should watch that space closely as precedent is established and evolves.
TRAIGA establishes a regulatory sandbox opportunity
TRAIGA is not just about duties and penalties; it also provides a potentially valuable opportunity for companies developing or deploying AI systems in the form of a state-approved protected regulatory environment. Companies can participate in that regulatory sandbox for up to 36 months, with limited extensions available for good cause. To be admitted, companies must apply and include the following in their applications:
-
A detailed description their AI system
- Their intended AI use cases
- A benefit assessment covering consumer, privacy and public safety implications
- A risk mitigation plan
- Evidence of compliance with federal AI laws
Participants admitted to the sandbox must then submit quarterly reports detailing system performance metrics, risk mitigation updates and stakeholder feedback.
In exchange for applying and being admitted to the sandbox, participants are permitted to research, test and train AI systems without the need to obtain traditional licenses or regulatory authorizations. However, there are two important caveats related to the sandbox that companies should consider.
First, companies should be aware that annual reports will be issued to the government summarizing the program’s activity, which will include information from and about participants. Tex. Bus. & Comm. Code § 553.103. The reporting agency is required to safeguard participating companies’ intellectual property, trade secrets and other sensitive information when doing so, but companies should be aware that information relating to participants will still be sent to the government in some form. Tex. Bus. & Comm. Code § 553.102(c).
Second, TRAIGA's core duties and prohibitions still apply to participants within the sandbox; participating in the program does not alter a company’s obligations or potential culpability for violations. Companies should therefore carefully weigh the benefits of regulatory flexibility against the administrative burden of applications and quarterly reporting, particularly given the three-year participation limit.
TRAIGA creates a safe harbor for companies that self-identify violations
Finally, TRAIGA also provides a liability safe harbor for companies, including those that do not participate in its sandbox program. That safe harbor protects, among other things, companies that discover and cure potential TRAIGA violations through:
-
Feedback from developers, deployers or other stakeholders
- Testing procedures like red-teaming or adversarial testing
- Following state agency guidelines
- Internal review processes, provided the company complies with nationally recognized AI risk management frameworks (such as NIST’s AI Risk Management Framework)
If a company discovers a potential violation in one of these ways, they cannot be found liable. Tex. Bus. & Comm. Code § 552.105(e). Further, a company cannot be found liable if someone else uses the company’s AI system in a way that violates TRAIGA. Tex. Bus. & Comm. Code § 552.105(e)(1). Additionally, the statute does not currently require the company to cure the violation within a set period to benefit from the safe harbor. TRAIGA’s safe harbor provision therefore incentivizes companies to implement their own protective measures and reporting mechanisms to ensure TRAIGA compliance.
TRAIGA establishes a CID process to investigate potential violations
Companies should also be aware that, if the Texas Attorney General receives a complaint through TRAIGA’s online reporting mechanism, TRAIGA allows the Texas Attorney General to issue (without also requiring it to issue a notice of violation) a civil investigative demand to determine if a violation has occurred. That civil investigative demand can require a company to provide, among other things:
-
AI system descriptions, including intended use, purpose and training data
- Input data details, outputs, performance metrics and known limitations
- Post-deployment monitoring and user safeguard measures
A company that develops or deploys AI must therefore be prepared to provide that information. By creating the processes and standards that will ensure that information is readily available—and making sure that information exists and is continually tracked—companies will likely get the added benefit of preventing TRAIGA violations in the first place, therefore reducing the likelihood of receiving a civil investigative demand and, if one is received, reducing the cost of compliance.
Looking ahead: Key takeaways and practical steps for companies
The passage of TRAIGA reflects Texas’s commitment to promoting AI innovation while safeguarding consumer rights. The law rewards proactive compliance efforts, following established risk management frameworks and responding promptly to potential violations. It also heavily penalizes intentional discrimination, manipulation toward harmful activities and evasion of disclosure duties. Since the proposed federal moratorium on state-level AI regulation did not pass, TRAIGA will be the law of the land for companies operating in Texas for the foreseeable future.
TRAIGA’s January 1, 2026 effective date is rapidly approaching. Companies should determine whether TRAIGA applies to their systems and work toward building the necessary policy compliance infrastructure by drafting policies, implementing technical controls and establishing audit trails. Early preparation will be essential for smooth compliance.
Even if TRAIGA does not apply to your company right now, Utah has already implemented an AI governance law, California’s and Colorado’s versions of AI governance laws go into effect in 2026 and other states will likely follow Texas’s lead. At the same time, the usage and expansion of AI systems will only continue. Sooner or later, companies with multi-state or national operations will be subject to AI governance regulations. So even if TRAIGA does not impose obligations on your company on January 1, it might be beneficial to begin implementing policies and procedures that align with its objectives now so as to avoid conflicts and problems in the future.
Our team will monitor these developments closely. For advice about TRAIGA's practical implications or representation related to TRAIGA, please contact our team.