Warranties and representations, ownership of intellectual property, limitations of liability, and indemnity are among the most important issues when negotiating a software contract with an AI vendor.

  • What’s reasonable?
  • What should you ask for?

That’s what I talk about in my latest video.

#erplawyer #erpcommunity #erpfailure #saps4hana #oraclecontracts #softwarelawyer #sapservices #saphanacloudplatform #saas #erpcloud #teamtaft #sapcontracts #oraclelawsuit #oraclefailure #oracletermination #saptermination

Artificial Intelligence (AI) is rapidly transforming the business world, moving from a niche technology to an integral part of operations across nearly every industry. Whether a business is acquiring a technology company or simply using AI services such as customer chatbots or data analysis programs, it is exposed to a new class of legal risks. To address these unique challenges, businesses and investors are increasingly including AI-specific representations and warranties in contracts and agreements. These clauses are becoming a crucial method for allocating and mitigating AI-related uncertainty.

Risks, Benefits, and Key Considerations

While the benefits of AI in terms of efficiency and pattern recognition are immense, the technology also presents novel and significant legal risks. The importance of AI technology in business means that acquirers are now seeking tailored assurances even when a target’s AI use is not material to the core business, recognizing that any unmanaged risk can lead to future liability.

Key AI Risks and Considerations

  • Intellectual Property (IP) Infringement and Ownership: AI models, particularly GenAI tools, are trained on vast datasets that may contain web-scraped data, images, text, or other content protected by copyright. Developers are increasingly facing allegations that their tools were trained by “ingesting protected content without a license.” This creates a risk of infringement claims against both the developer and the user of such AI tools. There also remains uncertainty over the extent of IP protection for both AI inputs and content created by AI.
  • Data Quality and Bias: AI outputs are only as good as the inputs. If the data used to train an AI model is inaccurate, biased, or otherwise flawed, the resulting model and its outputs may be equally flawed. Users of AI services should ask how the data used to train the model was sourced. This risk must be addressed through careful due diligence.
  • Data Privacy: Large datasets used to train machine learning models may inadvertently incorporate personal, sensitive, or inaccurate information. Public generative AI tools cannot guarantee deletion or non-retention, as information submitted may be retained and used to further refine the model. Beyond the risks associated with personal information, companies need to ensure that their employees do not disclose sensitive company IP or customer information to a publicly accessible system.

Types of AI Representations and Warranties

AI-specific representations and warranties that go beyond standard IP and technology warranties are a method for buyers and investors to obtain contractual assurances that risks unique to AI have been addressed. These specialized clauses can tailor risk allocation by assigning responsibility for AI-specific issues to the seller, backstop the buyer’s due diligence by providing contractual assurances, and offer a clear path for recourse (like indemnity) if a post-transaction lawsuit arises. The inclusion of AI clauses can help protect investments and mitigate exposure.

Common Topics Addressed by AI Clauses

  • Data Use and Training Data: Warranties concerning the target company’s rights to use data for AI training, along with assurances as to the source, accuracy, and ownership of the training data set, are increasingly common. A clause may require a specific representation that the AI model was trained only with permissioned data (i.e., data that was obtained through legally binding consent or licenses for use). For companies using third-party GenAI, a representation may require the disclosure of the specific tools being utilized and the terms of the applicable license.
  • Intellectual Property: Clauses certifying ownership of AI-specific assets, such as algorithms, models, and parameters, can be included in contracts. For example, a clause may state that the user or licensee will own the IP for any works generated by the AI model, especially when a model is used for product design or content creation. These clauses may also address the risk of infringement associated with a model’s training and output, such as through indemnification provisions in service contracts.
  • Governance and Compliance: Contracts may now include assurances that internal AI governance frameworks exist, including documented policies for testing and monitoring and “human in the loop” requirements, and that the entity complies with any applicable AI laws and regulations. A representation might be that no AI models or platforms were utilized in the generation of a product, or that all employees have signed a data use agreement that prohibits entering any company information into GenAI models.

Conclusion

The market for generative and agentic AI is expanding rapidly, and AI clauses will only become more common. Failing to understand how a business uses AI and to address AI risk contractually is an enterprise-level weakness. Downstream non-compliant service providers or contractors may taint any upstream use of data and create liability. Companies must be prepared to answer questions about their use of AI tools and processes. Even for companies not contemplating a sale or acquisition in the near future, questions about AI use are now appearing as part of the underwriting and renewal process for certain liability and cyber insurance policies. Close reading of any contractual provisions relating to the use of AI or AI-generated data is necessary, as is ensuring compliance with existing restrictions.

Protecting trade secrets starts with preparation. Building strong systems and habits helps keep valuable information secure and limits the risk of leaks.

  1. Inventory Trade Secrets: List what information is confidential and record its value. Keeping good records helps if you ever need to prove your rights.
  2. Regular Employee Training: Teach employees how to recognize and handle trade secrets. Refresh this training regularly so protocols stay top of mind.
  3. Implement Strong Agreements: Have anyone with access sign clear contracts that set expectations during and after their time with your business. Written agreements make enforcement easier if there’s ever a problem.
  4. Control the Use of AI Tools: Limit the use of confidential data in public AI tools. Use only secure, approved systems for handling private information (a simple illustrative guardrail appears after this list).
  5. Enhance System Security: Enable safeguards such as multifactor authentication, monitor for threats, and block bots and unapproved software to guard against leaks.
  6. Prepare for Employee Exits: Clarify the company’s right to review devices and accounts upon an employee’s departure. Address relevant rules and dispute procedures in advance.
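To make item 4 concrete, here is a minimal sketch of the kind of technical guardrail that can back up a written policy: a pre-submission filter that flags text containing confidentiality markers before it reaches a public GenAI tool. The marker patterns, code name, and function below are hypothetical illustrations, not a complete or recommended control; real safeguards should be designed with your IT security team and counsel.

    # Minimal illustrative sketch (hypothetical markers and policy): flag text
    # that looks confidential before it is submitted to a public GenAI tool.
    import re

    CONFIDENTIAL_MARKERS = [
        r"\bconfidential\b",
        r"\btrade\s+secret\b",
        r"\battorney[- ]client\b",
        r"\bproject\s+falcon\b",  # stand-in for an internal code name (hypothetical)
    ]

    def is_safe_to_submit(text: str) -> bool:
        """Return False if the text appears to contain confidential material."""
        lowered = text.lower()
        return not any(re.search(pattern, lowered) for pattern in CONFIDENTIAL_MARKERS)

    if __name__ == "__main__":
        sample = "Summarize the attached Confidential pricing model for Project Falcon."
        if is_safe_to_submit(sample):
            print("OK to send to an approved AI tool.")
        else:
            print("Blocked: possible confidential content. Use a secure, approved system.")

A filter like this is only a backstop; the training, agreements, and exit procedures in the other steps do most of the work.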

Many companies review trade secret protection only after problems arise, but taking these steps now can help reduce the risk of losing valuable information. In this video, I explain how to safeguard your company before a breach occurs.


#LitigationStrategy #businesslaw #riskmanagement #commerciallitigation #tradesecrets #BillWagnerLaw #aiandlaw #intellectualproperty #dataprotection #artificialintelligence

ERP vendors are notorious for creating a false sense of urgency, using arbitrary support deadlines and promises of expanded functionality and artificial intelligence to force customers to the Cloud.

  • Vendors are not pushing you to the Cloud for your benefit.
  • Just because a vendor is sunsetting support doesn’t mean you don’t have options.
  • One of the worst options is paying the vendor a premium for extended support beyond the drop-dead date.

The reality is that the Cloud is not always better; it can be detrimental.

  • If you have a highly customized system that incorporates your business processes and provides you with a competitive advantage, moving to the Cloud could cost you that advantage.

Do you really need to move to the Cloud? The answer might surprise you.

CONTACT ME AND MY TEAM

#erplawyer #erpcommunity #erpfailure #saps4hana #oraclecontracts #softwarelawyer #sapservices #saphanacloudplatform #saas #erpcloud #teamtaft #sapcontracts #oraclelawsuit #oraclefailure #oracletermination #saptermination

Your company’s most valuable assets may not appear on your balance sheet. They’re in your systems, your processes, your technology, and your people. Trade secrets don’t require registration and don’t expire, but they only remain protected if you actively safeguard them.

This video explains what qualifies as a trade secret under U.S. law and how to know if your company is doing enough to protect its most valuable information.

#insurancecoverage #LitigationStrategy #businesslaw #riskmanagement #commerciallitigation #tradesecrets #ipprotection #BillWagnerLaw #InnovationLaw

Vendors tout cloud software as a cheaper alternative to traditional on-premise solutions.

  • While cloud solutions can often be implemented at a lower cost, the cost of accessing and using a cloud solution over the product’s life cycle often exceeds the cost of an on-premise solution (see the illustrative comparison below).
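As a purely illustrative back-of-the-envelope comparison, the sketch below uses entirely hypothetical figures (subscription price, user count, license fee, maintenance rate, and time horizon are assumptions, not vendor pricing) to show how recurring subscription fees can overtake a one-time license plus annual maintenance over a product’s life cycle.

    # Back-of-the-envelope life-cycle cost comparison. Every figure is hypothetical.
    years = 10
    users = 200

    # Cloud / SaaS: recurring per-user subscription (hypothetical rate).
    saas_per_user_per_month = 150
    saas_total = saas_per_user_per_month * users * 12 * years

    # On-premise: one-time perpetual license plus annual maintenance (hypothetical).
    license_fee = 1_000_000
    annual_maintenance_rate = 0.20  # 20% of the license fee per year
    on_prem_total = license_fee + license_fee * annual_maintenance_rate * years

    print(f"10-year cloud cost:      ${saas_total:,.0f}")     # $3,600,000
    print(f"10-year on-premise cost: ${on_prem_total:,.0f}")  # $3,000,000

Under these assumed numbers, the subscription costs more over ten years even though it required far less cash up front; with different assumptions the comparison can flip, which is exactly why the life-cycle math is worth running before you sign.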

I discuss these issues in my latest video.

#erplawyer #erpcommunity #erpfailure #saps4hana #oraclecontracts #softwarelawyer #sapservices #saphanacloudplatform #saas #erpcloud #teamtaft #sapcontracts #oraclelawsuit #oraclefailure #oracletermination #saptermination

When a lawsuit hits your manufacturing business, the last thing you want is uncertainty about your insurance coverage. 

In this video, I’ll walk you through how to position your company to recover fast and fully when facing legal trouble. If your operations are evolving, your insurance plan should be too.

#insurancecoverage #LitigationStrategy #businesslaw #riskmanagement #commerciallitigation #manufacturingindustry #cyberinsurance #ProductLiability #legalinsights

Stay Connected with Us!

👉 WWagner@taftlaw.com

👉Dir: 317.713.3614 | Cell: 317.431.5979

👉Tel: 317.713.3500 | Fax: 317.715.4537

👉One Indiana Square, Suite 3500 Indianapolis, Indiana 46204-2023

👉 Website: https://www.taftlaw.com/people/willia…

Want to see if we can help you with your charges? 📞 Reach out today for a consultation!

The likelihood that your ERP project will take longer than expected, be more challenging than anticipated, and cost more than initially estimated is exceptionally high.

  • Common causes of ERP project failure include inadequate requirements gathering, unrealistic timelines, insufficient testing, and resistance to change.
  • What can you do to maximize success?

I talk about this in my latest video.

#erplawyer #erpcommunity #erpfailure #saps4hana #oraclecontracts #softwarelawyer #sapservices #saphanacloudplatform #saas #erpcloud #teamtaft #sapcontracts #oraclelawsuit #oraclefailure #oracletermination #saptermination

CONTACT ME AND MY TEAM: https://www.taftlaw.com/people/marcus…

🔗 EXPLORE OUR LATEST RESOURCES: Taft Technology and AI Blog: https://www.tafttechlaw.com/

ERP Resource page: https://softwarenegotiation.com/erp-r…

Software Negotiation Checklist: https://softwarenegotiation.com/softw…

ERP Negotiation Tips: https://softwarenegotiation.com/tips-…

Common Reasons Why ERP Implementations Fail: https://softwarenegotiation.com/commo…

Key Provisions In An ERP Contract: https://softwarenegotiation.com/draft…

📱 CONNECT WITH ME: LinkedIn: @marcusharris1 Instagram: 1marcusharris X: @softwarelawyer TikTok: @1marcusharris

📩 Got Questions? Contact me: mharris@taftlaw.com

AI is now deeply embedded in global supply chains, forecasting, and business decision-making, especially in the manufacturing industry.

In this video, I explain how manufacturers and tech-driven businesses can use insurance as a powerful risk management tool in disputes involving AI, robotics, and automation. If your company is using AI or system integration in critical operations, this video will show you how to protect your business and preserve leverage in case things go wrong.

#privacylaw #datasecuritylaw #whitecollardefense #defenselaw #classactionlawsuits #ailawsuits #fraudprevention #aifraud #AilegalRisks #insurancecoverage #TechLitigation #businessprotection

Martin Edwards, vice president of Taft’s Public Affairs Strategies Group in Taft’s Washington, D.C. office, contributed to this post.

On July 23, 2025, the White House published “America’s AI Action Plan,” which sets out a public policy framework aimed at securing the United States’ technological leadership in artificial intelligence. The AI Action Plan, coupled with three executive orders implementing U.S. AI policy, is designed to “lead the world in AI.” President Donald Trump’s administration aims to cement U.S. global dominance in what it deems the three pillars of AI policy: innovation, infrastructure, and international diplomacy and security. Below, Taft analyzes core elements of each pillar and highlights the legal and regulatory considerations relevant to businesses, investors, and public entities.

Pillar 1: Accelerate AI Innovation

The AI Action Plan emphasizes deregulation. The Trump administration proposes removing federal “red tape” by revisiting and rolling back regulations deemed burdensome for AI innovation. Agencies must identify, revise, or repeal rules inhibiting development or deployment, and federal funding may be limited for states with restrictive AI regimes. In a speech at an AI summit introducing the executive orders that accompany the AI Action Plan, Trump emphasized the need for one federal AI standard rather than several distinct state standards. Although not addressed explicitly in the plan, the federal government will likely seek to deregulate through a moratorium on state AI legislation similar to the one abandoned earlier this month. Such a moratorium would take an act of Congress, along with extensive time and negotiation to address the issues identified during the budget reconciliation process, where the effort first took shape.

Although the repeal of regulations deemed to be inhibiting AI is a top priority, the AI Action Plan outlines additional White House values, including:

  • Free Speech and Objectivity: AI systems procured or supported by the federal government should protect free speech and maintain objectivity, avoiding “top-down ideological bias.”
  • Open-Source AI: The AI Action Plan explicitly supports open-source and open-weight AI models. Federal partnerships will be leveraged to increase access to computing resources and foster a robust ecosystem for startups, researchers, and academic institutions.
  • Sectoral AI Adoption: Regulatory sandboxes and “Centers of Excellence” should enable experimentation, with special attention to health care, energy, agriculture, and defense.
  • Manufacturing & IP Protections: Vigorous investment is planned for next-generation manufacturing (chips, robotics, drones) and for protecting commercial and governmental innovations from interference or theft.
  • Synthetic Media & Legal Standards: Legal reforms are anticipated for combating deepfakes and synthetic media, both by developing forensic standards and by adapting evidentiary requirements in the justice system.

Pillar 2: Build American AI Infrastructure

AI requires enormous amounts of energy. Accordingly, the AI Action Plan emphasizes building and maintaining vast AI infrastructure along with the power needed to facilitate processing. The plan proposes categorical exclusions under federal environmental laws for data centers and AI-critical energy infrastructure, as well as fast-tracked permitting for chip fabrication and energy projects. During his remarks, Trump repeatedly emphasized that nuclear power can be a safe and effective means of providing such power.

Recognizing AI’s power demands, the AI Action Plan calls for grid upgrades, making federal land available, supporting the interconnection of dispatchable power sources and new energy generation projects (nuclear, geothermal), optimizing the electric grid, and harmonizing resource adequacy standards. The plan also intends to streamline or reduce regulations promulgated under the Clean Air Act, Clean Water Act, Comprehensive Environmental Response, Compensation and Liability Act, and other related laws. In addition, the policy focuses on bringing advanced semiconductor manufacturers to U.S. soil, removing nonessential policy requirements from grant aid, and fostering AI-driven chip production. On the labor front, national initiatives are envisioned to identify high-priority AI-related infrastructure jobs, develop skills frameworks, and expand apprenticeships and technical education.

The AI Action Plan also focuses on information security and incident response in AI as part of its infrastructure priorities. Technical standards will be established for high-security AI data centers with an emphasis on classified computer environments to protect national security intelligence.

Pillar 3: Lead in International AI Diplomacy and Security

The AI Action Plan aims to establish American AI as the worldwide gold standard and to ensure that international allies are building on U.S. technology. The Trump administration intends to export the entire tech stack (hardware, models, software, and standards) to partners and allies through economic diplomacy, trade support, and technology alliances. U.S. representatives will be charged with resisting “authoritarian” or foreign policy influences, particularly from China, in multilateral AI governance forums. In addition, the plan calls for more robust enforcement and monitoring of AI compute and semiconductor exports, including new controls on subsystems and closer international legal alignment.

National security risk governance is also addressed in the AI Action Plan. The Trump administration plans to task federal agencies with evaluating security risks associated with “frontier” AI models, with an eye toward both cyber and CBRNE (chemical, biological, radiological, nuclear, and explosives) dangers. Likewise, new requirements will be placed on recipients of federal scientific funding to use only vendors adopting rigorous screening protocols for nucleic acid synthesis — addressing biothreat and dual-use concerns.

Implications for U.S. Stakeholders

Companies across the AI, technology, defense, and infrastructure sectors must pay close attention to fast-moving regulatory change, enhanced export controls, and procurement requirements. Open-source AI and data partnerships present opportunities, while workforce and IP mandates may require new compliance protocols. Funding will increasingly hinge on regulatory attitudes toward AI, possibly incentivizing states to loosen restrictions and harmonize their policies with the federal approach. Finally, expanded access to computing, revised data disclosure expectations, and a focus on next-generation science herald both opportunity and compliance challenges, especially regarding data quality and proprietary research.

Conclusion

The AI Action Plan marks a shift in federal technology and regulatory policy. Emphasizing deregulation, rapid infrastructure buildout, workforce adaptation, and global technology alliances, the proposed roadmap signals substantial changes across the public and private sectors. Businesses, public entities, and law firms should prepare for both new opportunities and enhanced compliance obligations as these initiatives roll out.