If you are moving fast with generative tools, the best legal practices when you use AI are not optional. They are what keep a promising brand from turning into a trademark dispute, a false advertising claim, or a costly rebrand.

What are the best legal practices when you use AI for branding?

The best legal practices when you use AI for branding are to keep meaningful human control, clear names and slogans before launch, verify every marketing claim, review vendor terms, document creative decisions, avoid lookalike prompts, and require legal signoff before a campaign goes live. That answer matters because the legal risk is very real. Under Section 43(a) of the Lanham Act, a business can face claims for false designations of origin, misleading descriptions, and advertising that is likely to cause confusion or misrepresent the nature or qualities of goods or services. The FTC also says advertising must be truthful, non-deceptive, and backed by evidence. Essentially, if your AI-assisted branding looks too much like someone else’s mark or makes claims you cannot support, you can still be sued.

Why AI speed creates trademark risk

Most companies do not have a legal framework for AI-assisted branding. Teams prompt a tool, pick the output they like, and move forward. That speed feels efficient, but it can hide major problems.

AI tools can produce names, slogans, logos, and product copy that sound familiar because they are built to predict likely language patterns. In branding, familiarity can be dangerous. A slogan that sounds polished may also sound too close to a competitor’s long-used phrase. A logo concept may feel distinctive inside your design meeting, but look much less original when compared against the market. Product copy may sound persuasive while quietly making claims your business has never tested or documented.

That is why the best legal practices when you use AI should begin before launch, not after someone sends a demand letter.

The law still applies even when AI helped create the brand

Some business leaders assume that if AI generated the first draft, the usual rules are somehow softer. They are not.

The Lanham Act still governs confusing branding and false advertising. The FTC still expects truthful, substantiated claims. That means the questions remain the same for Indianapolis businesses and national brands alike. Will consumers be confused? Is the claim accurate? Can you prove it? If the answer is unclear, you need more review before launch.

7 best legal practices when you use AI to build your brand

1. Keep human ownership and control

The first rule is simple. A person, not a tool, should own the decision-making process. Your team should define brand strategy, choose among outputs, revise what the AI creates, and approve the final result. Human control helps show that the brand reflects business judgment rather than an unexamined machine output.

That approach also fits with broader federal IP guidance. The USPTO has stated that AI systems are tools used by human inventors, not inventors themselves, and the U.S. Copyright Office has explained that protection turns on sufficient human authorship rather than mere prompting alone. Branding is not identical to patent or copyright law, but the practical lesson is the same: human contribution matters.

2. Clear AI-generated names, slogans, logos, and domains

Every serious brand element should go through clearance. That means screening for similar marks, similar slogans, overlapping domains, and marketplace use that could trigger confusion.

Too many teams think AI output is new because it feels new to them. That is not how trademark risk works. The real question is whether your proposed brand conflicts with rights that already exist. Clearance should happen before launch, before ad spend, and before public rollout across multiple channels.

3. Substantiate claims before they go live

This step matters just as much as trademark clearance. If AI writes product claims, performance statements, comparisons, or guarantees, someone needs to verify each one.

The FTC’s rule is direct: ads must be truthful, advertisers must have evidence to back up their claims, and ads cannot be unfair. So if your AI drafted language like “industry leading,” “clinically proven,” “fastest,” or “guaranteed,” your business should pause and ask what evidence supports that wording. If you cannot prove it, do not publish it.

4. Review vendor terms and output rights

Not every AI platform gives you the same protections. Before relying on a tool for core brand development, review who owns the output, what restrictions apply, how the provider handles training data, and whether any indemnity exists.

For enterprise users, this is often negotiable. It is better to address those issues at the contracting stage than after a dispute arises over ownership, confidentiality, or third-party claims.

5. Create an audit trail

Save prompt history, revision notes, internal comments, search results, and the reasons your team selected one option over another. Documentation serves two purposes.

First, it helps prove thoughtful human involvement. Second, it shows your business acted responsibly if a dispute later arises. In litigation, a clean record can matter. It can show you did not blindly copy, did not ignore obvious risk, and did take legal review seriously.

6. Ban lookalike prompting

One of the riskiest habits in AI branding is asking for something “like” a competitor, a market leader, or a famous campaign. That shortcut may feel harmless in a brainstorming session, but it increases the odds that the output will echo protected branding or trade dress.

A better prompt focuses on your own personality, values, audience, and differentiators. Ask for originality, not imitation. This is one of the most practical and best legal practices when you use AI because it reduces confusion risk at the source.

7. Use launch gates before anything goes public

Build a final review checkpoint into the process. No brand element, campaign asset, or AI-drafted claim should go live without a confusion check and legal signoff.

This can be lightweight for lower-risk materials and more formal for major launches. The important point is consistency. A launch gate gives your company one last chance to catch a problem before the public sees it.

A realistic example of how this can unravel

Imagine an Indianapolis software company using AI to generate a new tagline for a product rollout. The internal team loves the result because it sounds polished, modern, and memorable. The marketing department places it on the website, in paid ads, and across sales decks.

A month later, the company receives a demand letter. A competitor has used a very similar tagline for years in the same industry. Now the competitor argues the new campaign is likely to confuse customers and also points to AI-drafted performance claims in the ads that were never substantiated.

This is exactly the kind of situation that can turn a fast launch into a costly legal problem. A basic clearance search, documented human edits, and a claim review checklist might have caught the issue before launch.

Questions I hear from business teams using AI

Can I trademark a name that AI helped me create?

Possibly, but the key issue is not whether AI helped. The real issue is whether the proposed mark is distinctive, available, and lawfully used in commerce without creating confusion with someone else’s rights.

Does using AI protect me if the output copies someone else?

No. If your branding creates confusion or includes misleading claims, the legal exposure stays with the business that used the material.

Do I need a lawyer before I launch an AI-generated slogan?

For a minor internal concept, maybe not immediately. But before a public launch, legal clearance is wise, especially if the slogan will appear in advertising, on packaging, or across multiple states.

What kinds of AI-generated claims are most dangerous?

Comparative claims, performance claims, superiority claims, scientific-sounding claims, and guarantees are all high risk when they are not backed by evidence.

Should my company have a written AI branding policy?

Yes. A written policy helps marketing, legal, leadership, and outside agencies follow the same review process. It also reduces inconsistency, which is where avoidable risk often begins.

A practical policy your team can use now

If your company is serious about AI-assisted branding, your policy should at least require the following:

  • A designated owner for brand decisions
  • Pre-launch trademark and slogan clearance
  • Review of all factual and performance claims
  • Approval of vendor terms for AI tools
  • Documentation of prompts, edits, and rationale
  • A prohibition on lookalike prompting
  • A final legal review before publication
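For teams that want to operationalize this policy, the checklist can be sketched as a simple pre-launch gate. This is an illustrative sketch only; the field names and the `BrandAsset`/`launch_gate` helpers are hypothetical, and real review workflows will be more nuanced than boolean flags.

```python
from dataclasses import dataclass

@dataclass
class BrandAsset:
    """Hypothetical record of one AI-assisted brand element awaiting launch."""
    name: str
    has_designated_owner: bool
    clearance_searched: bool
    claims_substantiated: bool
    vendor_terms_approved: bool
    prompts_documented: bool
    lookalike_prompt_used: bool
    legal_signoff: bool

def launch_gate(asset: BrandAsset) -> list[str]:
    """Return blocking issues; an empty list means the asset may go live."""
    issues = []
    if not asset.has_designated_owner:
        issues.append("no designated brand-decision owner")
    if not asset.clearance_searched:
        issues.append("trademark/slogan clearance not completed")
    if not asset.claims_substantiated:
        issues.append("factual or performance claims unverified")
    if not asset.vendor_terms_approved:
        issues.append("AI vendor terms not reviewed")
    if not asset.prompts_documented:
        issues.append("prompt/edit audit trail missing")
    if asset.lookalike_prompt_used:
        issues.append("lookalike prompting detected")
    if not asset.legal_signoff:
        issues.append("final legal review outstanding")
    return issues
```

A tagline that clears every checkpoint returns an empty list; any remaining issue blocks publication until resolved.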

That framework is not burdensome. It is efficient risk control. In many cases, it is the difference between scaling a brand confidently and backpedaling under pressure.

Protect the upside before the risk gets expensive

AI can absolutely help your business move faster. But speed without legal discipline is not a growth strategy. It is exposure.

The best legal practices when you use AI allow you to keep the benefits of speed while protecting trademark value, advertising accuracy, and long-term brand ownership. When your process includes human control, clearance, substantiation, documentation, and launch review, you give your business a much stronger foundation for growth.

If you are building or refreshing a brand with AI, now is the right time to review your process. You can learn more about my experience in commercial litigation, privacy, security, and artificial intelligence matters, explore my Taft profile and contact information, and read Taft’s related guidance on AI-powered fraud and business safeguards.

The Financial Industry Regulatory Authority (“FINRA”) and the U.S. Department of the Treasury (“Treasury”) (as part of a public-private partnership) have recently issued guidance regarding the use of AI by the financial services industry. This alert summarizes certain AI-related updates from the 2026 FINRA Annual Regulatory Oversight Report (the “Report”), and the Treasury partnership’s recently published AI Lexicon and Financial Services AI Risk Management Framework.

FINRA

FINRA’s 2026 Report contains a new section specifically devoted to generative AI (“GenAI”). The Report clarifies that “FINRA’s rules… and the securities laws more generally, continue to apply when firms use GenAI or similar technologies in the course of their businesses, just as they apply when firms use any other technology or tool.”1 The Report suggests that existing rules regarding supervision, communications, recordkeeping, and fair dealing may apply to uses of GenAI by securities broker-dealers.2

The Report provides recommendations for firms contemplating GenAI solutions. Such recommendations include:

  • “Robust testing of GenAI to understand the capabilities, limitations, and performance of the model. Testing areas to consider include areas such as privacy, integrity, reliability, and accuracy.”
  • “Ongoing monitoring of prompts, responses, and outputs to confirm the GenAI solution continues to perform as expected and results in compliant behavior.”
  • “Approaches to identify and mitigate associated risks, including, but not limited to, accuracy (e.g., hallucinations) and bias.”
  • “Assessing whether the firm’s cybersecurity program appropriately contemplates: risks associated with the firm’s and its third-party vendors’ use of GenAI; and how its technology tools, data provenance, and processes identify how threat actors use AI or GenAI against the firm or its customers.”
  • “Developing supervisory processes to develop and use GenAI at an enterprise level.”
  • “Establishing a supervision, governance, or model risk management framework that establishes clear policies and procedures to develop, implement, use, and monitor GenAI, while maintaining comprehensive documentation throughout.”3

In addition to the Report, Treasury has issued guidance that is instructive on the use of AI by financial institutions.

U.S. Department of the Treasury

The U.S. Department of the Treasury recently released two new resources to guide AI use in the financial sector as part of the President’s AI Action Plan:4 a shared Artificial Intelligence Lexicon and the Financial Services AI Risk Management Framework (FS AI RMF). The effort focuses on “clear standards, shared understanding, and risk-based governance to ensure artificial intelligence is deployed safely and responsibly.”5 Produced through a public-private partnership, these new guidance resources are intended to enable the “secure and resilient”6 development and use of AI across the U.S. financial system. The “AI Lexicon”7 promotes a shared AI vocabulary, while the “Financial Services AI Risk Management Framework” (the “Framework”) adapts the NIST AI Risk Management Framework for specific application to the financial services industry.8 These publications are two of six deliverables that the partnership plans to publish to “provide a foundation for the use of AI in financial services, addressing governance, data practices, transparency, fraud, and digital identity in an integrated way.”9

The Treasury partnership describes the Framework as “an industry‑led, sector‑specific AI risk management framework developed through public‑private collaboration with more than 100 financial institutions and input from U.S. and international agencies, including NIST. Structurally aligned with the NIST AI RMF and expanded with 230 Control Objectives, it helps financial organizations of all sizes manage and govern AI risks while enabling responsible innovation.”10

The Framework consists of four components: (1) an AI adoption stage questionnaire; (2) a risk and control matrix; (3) a user guidebook; and (4) a control objective reference guide. The Treasury partnership also published a website that provides information related to, and at times facilitates participation in, the Framework.

Conclusion

These recent developments indicate that AI is now officially on the radar of U.S. financial authorities, regulators, and private industry groups. Though the FINRA Report, the AI Lexicon, and the Framework are all styled as non-binding guidance, they may nevertheless indicate a trend toward increased scrutiny of AI practices and form standards against which financial services companies could be evaluated.

If your business provides financial products or services and utilizes or plans to onboard any AI technologies, now is the time to begin developing an AI compliance strategy and program that takes into account the above guidance in addition to new and emerging state AI and data protection laws.

Taft’s Finance; Privacy, Security, and AI; and FinTech practice groups have experience helping clients in the financial services industry develop risk-based compliance strategies for AI and other data protection laws, regulations, and standards.

Footnotes:

  1. 2026 FINRA Annual Regulatory Oversight Report at p. 24. ↩︎
  2. Id. ↩︎
  3. All quoted language in bullets from the 2026 FINRA Annual Regulatory Oversight Report at 26. ↩︎
  4. Winning the Race: America’s AI Action Plan (July 2025). ↩︎
  5. U.S. Department of the Treasury, Treasury Releases Two New Resources to Guide AI Use in the Financial Sector. ↩︎
  6. Financial Services Sector Coordinating Council, Financial Sector Artificial Intelligence Executive Oversight Group Deliverables (Feb. 19, 2026). ↩︎
  7. Financial Services Coordinating Council, Artificial Intelligence Executive Oversight Group AI Lexicon (Feb. 2026). ↩︎
  8. Cyber Risk Institute, Financial Services AI Risk Management Framework (Feb. 2026). ↩︎
  9. U.S. Department of the Treasury, Treasury Releases Two New Resources to Guide AI Use in the Financial Sector. ↩︎
  10. Cyber Risk Institute AI Risk Management Framework. ↩︎

AI lawsuits are increasing as businesses use AI-generated logos and brand names without trademark clearance. Learn how the Lanham Act applies and how Taft can help mitigate risk.

Artificial intelligence is transforming modern branding. Companies now use AI to generate product names, logos, taglines, social media campaigns, and even full-scale brand launches in minutes.

The efficiency is undeniable. The legal exposure, however, is often overlooked.

As discussed in recent legal commentary, businesses increasingly face AI lawsuits tied to trademark infringement, false designation of origin, and intellectual property disputes stemming from AI-generated branding.

At Taft, attorneys working at the intersection of intellectual property, emerging technology, and business litigation are seeing firsthand how AI adoption is outpacing risk management.

#AILawsuits #TrademarkLaw #ArtificialIntelligence #IntellectualProperty #LanhamAct #TaftLaw #industriallawyer #insurancelawyer #commericallitigation #privacylaw #datasecuritylaw #whitecollardefense #defenselaw #classactionlawsuits #AIlawsuits #AIdatabreach

Insurance coverage for trademark infringement lawsuits is far narrower than most executives realize.

In this video, Bill Wagner, a partner in Taft’s Indianapolis office, explains what CGL policies may cover, why willfulness allegations destroy coverage, and how insurer-appointed defense counsel can put companies at risk.

Under new regulations effective January 1, 2026, California regulators now expect businesses to conduct an annual “cybersecurity audit” that assesses “how the business’s cybersecurity program protects personal information from unauthorized access, destruction, use, modification, or disclosure; and protects against unauthorized activity resulting in the loss of availability of personal information.”

Now is the time to prepare for these requirements.

As explained below, these requirements are detailed and contemplate a rigorous, professional, independent, evidence-based audit. Audit results must be shared with the California regulator under penalty of perjury.

Applicability & Distinction from Risk Assessments

California’s cybersecurity audit requirements apply generally to businesses that process the personal information of at least 250,000 consumers or households (50,000 if sensitive personal information), or that derive 50% or more of their revenue from the “sale” or “sharing” of data. Businesses should consider whether they meet these volume and activity thresholds given their processing of any California-origin data for any reason (e.g., website visitors, customer data, etc.).
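As a rough illustration of how these thresholds combine, the volume and activity tests can be sketched as follows. This is a simplified, hypothetical check for discussion purposes only; actual applicability turns on the full text of the California regulations, not on this sketch.

```python
def audit_likely_applies(consumers: int,
                         sensitive_consumers: int,
                         sale_or_share_revenue_pct: float) -> bool:
    """Sketch of the cybersecurity audit applicability thresholds.

    Illustrative only -- any of the three prongs, standing alone,
    is enough to bring a business within scope:
      - 250,000+ consumers or households,
      - 50,000+ if the information is sensitive, or
      - 50%+ of revenue from "sale" or "sharing" of data.
    """
    return (
        consumers >= 250_000
        or sensitive_consumers >= 50_000
        or sale_or_share_revenue_pct >= 50.0
    )
```

Note that the test is disjunctive: a small data broker with modest volumes can still be in scope on the revenue prong alone.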

The cybersecurity audit requirements stand separately and distinctly from California risk assessment requirements. Unlike risk assessments, cybersecurity audits generally do not aim to assess specific processing activities individually. Instead, the cybersecurity audit assesses the quality of the overall cybersecurity program. The regulation therefore seems to assume that a business maintains such an overarching cybersecurity program and that it extends protections to all California resident information. That program is what the cybersecurity audit will assess.

Timing

The regulation contemplates that larger businesses (gross revenue > $100MM) will be the first to submit a comprehensive cybersecurity audit report, in April 2028, covering the period January 1, 2027 through January 1, 2028. Eventually, however, all businesses meeting the applicability thresholds will need to conduct audits and submit reports. After the first submission, audit reports are expected to be due annually each April for the preceding year.

Businesses should strongly consider conducting an advance cybersecurity assessment – a “mock” audit – in 2026. An advance assessment can provide an opportunity to identify and repair issues before the mandatory audit and submission to the California regulator.

Auditors – Professional, Independent, and Evidence-Based

The cybersecurity audit must be conducted by a “qualified, objective, independent professional, using procedures and standards accepted in the profession of auditing.” Auditors must have both specific cybersecurity knowledge and knowledge of “how to audit a business’s cybersecurity program.” The regulations give auditors real authority to compel the business to provide relevant information. Companies should think carefully about their selection of auditor under these standards.

For companies with a strong internal audit function, internal auditors are permitted. However, crucially, the lead internal auditor must report directly to a member of the business’s executive team who does not have direct responsibility for the business’s cybersecurity program. This likely means that the cybersecurity audit function cannot fall under the information security organization itself or be the responsibility of the CISO.

The regulation contemplates that auditors will independently examine evidence to prepare their findings. Auditors may not base findings “primarily on assertions or attestations by the business’s management.”

Reliance on Other Audits

Given certain detailed audit requirements particular to California law, existing audits conducted by a business likely will not suffice, on their own, to satisfy California requirements. Businesses may, however, partially utilize and supplement existing audits, assuming adequate scope and that such audits otherwise meet California requirements. Organizations may want to conduct a crosswalk or mapping to identify how their existing audit frameworks correspond to California requirements.

Detailed Audit Requirements

California regulations provide a detailed list of issues and controls that must be assessed as part of the audit; the detail defies concise summary. As a limited sample, the cybersecurity audit must assess:

  • “oversight of service providers, contractors, and third parties to ensure compliance with [detailed California contracting requirement]”
  • “Personal information inventories (e.g., maps and flows identifying where personal information is stored, and how it can be accessed) and the classification and tagging of personal information (e.g., how personal information is tagged and how those tags are used to control the use and disclosure of personal information)”; and
  • “Internal and external vulnerability scans, penetration testing, and vulnerability disclosure and reporting (e.g., bug bounty and ethical hacking programs)”

These sit among many other detailed requirements. The audit report must detail gaps, weaknesses, and remediation plans in the areas covered.

Submission Under Penalty of Perjury

Once completed, the cybersecurity audit must ultimately be certified by an executive responsible for the audit and knowledgeable enough to provide accurate information. This executive must submit the audit to the California regulator under penalty of perjury. Submission in this form strongly suggests that the executive may be held personally liable for inaccuracies, perhaps especially if the inaccuracies are deemed intentional falsehoods. Companies should anticipate that, in the event of any adverse interaction with the regulator, past audit reports may become a particular point of regulatory scrutiny, and certifying executives may be asked to give testimony.

Legal Support

Experienced counsel can help businesses prepare for cybersecurity audits in at least a few ways. First, counsel can assess and confirm applicability. Counsel can also help bridge the gap between regulatory text and implementation by working with internal or external auditors to validate audit design and ensure that the detailed audit requirements are understood. Once an audit report is prepared in draft form, experienced counsel can advise on the final form of the document that will ultimately be submitted to an increasingly active and punitive privacy regulator. For more information, please do not hesitate to contact a member of Taft’s Privacy, Security, and Artificial Intelligence team.

Under newly implemented regulations of the California Consumer Privacy Act (CCPA), California now requires a formal risk assessment “before initiating any processing activity” of certain (sensitive) sorts. The regulation explicitly contemplates that businesses will complete risk assessments now, in 2026.

Eventually, such risk assessments – including those completed this year – must be signed by an executive and submitted to the California regulator under penalty of perjury.

Businesses and executives subject to the CCPA must prepare now to address these requirements. In particular, the regulation may impact businesses and services including SaaS/technology firms, payments or financial technology solutions, services, consumer services, employment or HR applications, AI solutions, or other processing involving California resident data.

The statute and regulations arguably provide for some narrowly tailored exceptions. Exceptions may include financial information subject to the Gramm-Leach-Bliley Act, certain limited employment-related uses, and/or certain health care institutions/information uses governed by HIPAA. However, relevant companies should consult competent legal counsel to assess whether they may fall within the scope of such an exception before relying on it.

Certain key requirements are noted below.

Any Processing of “Sensitive” Personal Information

Businesses must conduct a risk assessment of any processing of “sensitive” personal data. Such “sensitive” data includes:

  • SSN, driver’s license, state ID card, or passport number.
  • Financial account, debit card, or credit card numbers in combination with any required security or access code, password, or credentials allowing access to an account.
  • Precise geolocation of a consumer.
  • Race, ethnicity, citizenship or immigration status, religion or philosophy, or union membership.
  • Mail, email, or text messaging content (apparently including messages sent to the consumer).
  • Individual genetic data.
  • Neural data and/or measurements.
  • Biometric information processed for identification purposes.
  • Personal information collected or analyzed regarding health, sex life, or sexual orientation.
  • Children’s data (< 16 years of age).

Note that some of these categories (e.g., messaging content) may be trivially easy to meet for almost any business that interacts with or provides services to consumers. Impacts are potentially heightened for businesses operating with sensitive data, such as financial, health, or payment data.

ADMT: Use for a Significant Decision & Training

Businesses must also conduct a risk assessment regarding the use of “automated decision-making technology” (ADMT) for a “significant decision.” ADMT can include artificial intelligence technologies or technology intended to replace human involvement. An ADMT makes a “significant decision” when the decision “results in the provision or denial of financial or lending services, housing, education enrollment or opportunities, employment or independent contracting opportunities or compensation, or health care services.”

A business may be in scope of the risk assessment requirements if, for example, its products, services, or automated activities involve:

  • Providing risk scoring or assessments, or otherwise helping decide when to extend credit or a loan, to exchange funds, offer housing, or to set up installment payment plans.
  • Searching and sorting job candidates into an auto-reject category.
  • AI-based screening for health care services.
  • Other “significant decision.”

Risk assessments are also required for certain uses of personal information to train an ADMT that will be used for significant decisions, including facial recognition, emotion recognition, and/or identity verification.

Selling or Sharing Data

Businesses must conduct a risk assessment when “selling” or “sharing” data within the meaning of California law. Based on statutory definitions and prior enforcement by California authorities, note that “selling” and “sharing” can include ordinary online tracking and analytics technologies common across many commercial websites. They can also include other common activities, such as service provider arrangements that are not subject to the strict contractual controls under California law limiting personal data use. For consumer finance businesses, the regulation specifically notes as an example that consumer budgeting calculators may involve regulated data “sharing” if, for example, they include a third-party advertisement.

Automated Processing to Infer

Businesses must conduct a risk assessment before using automated processing to infer certain categories of information about a consumer, including economic situation, behavior, personal preferences, or interests. Businesses should consider a risk assessment when using AI or other technology to assess job candidates, perform analytics, or form other assessments of individuals regarding “intelligence, ability, aptitude, performance at work, economic situation, health (including mental health), personal preferences, interests, reliability, predispositions, behavior, or movements in a sensitive location.”

Conclusion

The updates to the CCPA regulations went into effect on Jan. 1, and businesses may now be required to perform a risk assessment before commencing the relevant processing. Content requirements for risk assessments are detailed and require identifying potential harms to consumers, offsetting benefits, and mitigating factors. The regulation provides detailed guidance on both the substance and the form of such assessments; existing assessment procedures are unlikely to meet California requirements unless specifically designed to do so. Finally, as noted above, assessments will ultimately have to be submitted to the California privacy regulator by a managing executive, along with a written statement under penalty of perjury that the risk assessment information submitted is true and correct.

For more information about the updated CCPA requirements, contact a member of Taft’s Privacy, Security, and Artificial Intelligence group or its Technology and Artificial Intelligence group.

Warranties and representations, ownership of intellectual property, limitations of liability, and indemnity are among the most important issues when negotiating a software contract with an AI Vendor.

  • What’s reasonable?
  • What should you ask for?

That’s what I talk about in my latest video.

#erplawyer #erpcommunity #erpfailure #saps4hana #oraclecontracts #softwarelawyer #sapservices #saphanacloudplatform #saas #erpcloud #teamtaft #sapcontracts #oraclelawsuit #oraclefailure #oracletermination #saptermination

Artificial Intelligence (AI) is rapidly transforming the business world, moving from a niche technology to an integral part of operations across nearly every industry. Whether a business is acquiring a technology company or simply using AI services such as customer chatbots or data analysis programs, it is exposed to a new class of legal risks. To address these unique challenges, businesses and investors are increasingly including AI-specific representations and warranties in contracts and agreements. These clauses are becoming a crucial method for effectively allocating and mitigating AI-related uncertainty.

Risks, Benefits, and Key Considerations

While the benefits of AI in terms of efficiency and pattern recognition are immense, the technology also presents novel and significant legal risks. The importance of AI technology in business means that acquirers are now seeking tailored assurances even when a target’s AI use is not material to the core business, recognizing that any unmanaged risk can lead to future liability.

Key AI Risks and Considerations

  • Intellectual Property (IP) Infringement and Ownership: AI models, particularly GenAI tools, are trained on vast datasets that may contain web-scraped data, images, text, or other content protected by copyright. Developers are increasingly facing allegations that their tools were trained by “ingesting protected content without a license.” This creates a risk of infringement claims against both the developer and the user of such AI tools. There also remains uncertainty over the extent of IP protection for both AI inputs and content created by AI.
  • Data Quality and Bias: AI outputs are only as good as the inputs. If the data used to train an AI model is inaccurate, biased, or otherwise flawed, the resulting model and its outputs may be equally flawed. Users of AI services should ask how the data used to train the model was sourced. This risk must be addressed through careful due diligence.
  • Data Privacy: Large datasets used to train machine learning models may inadvertently incorporate personal, sensitive, or inaccurate information. Public generative AI tools cannot guarantee deletion or non-retention, as information submitted may be used to further refine the model. Beyond the risks associated with personal information, companies need to ensure that their employees do not disclose sensitive company IP or customer information to a publicly accessible system.

Types of AI Representations and Warranties

AI-specific representations and warranties go beyond standard IP and technology warranties to give buyers and investors contractual assurance that risks unique to AI have been addressed. These specialized clauses can tailor risk allocation by assigning responsibility for AI-specific issues to the seller, backstop the buyer’s due diligence with contractual assurances, and offer a clear path to recourse (such as indemnity) if a post-transaction lawsuit arises. Including AI clauses can help protect investments and mitigate exposure.

Common Topics Addressed by AI Clauses

  • Data Use and Training Data: Warranties concerning the target company’s rights to use data for AI training and assurances as to the source, accuracy, and ownership of the training data set are being more frequently utilized. A clause may require a specific representation that the AI model was trained only with permissioned data (i.e., data that was obtained through legally binding consent or licenses for use). For companies using third-party GenAI, a representation may require the disclosure of the specific tools being utilized and the terms of the applicable license.
  • Intellectual Property: Clauses certifying ownership of AI-specific assets, such as algorithms, models, and parameters, can be included in contracts. For example, a clause may state that the user or licensee will own the IP for any works generated by the AI model, especially when a model is used for product design or content creation. These clauses may also address the risk of infringement associated with a model’s training and output, such as through indemnification provisions in service contracts.
  • Governance and Compliance: Contracts may now include assurances that there are internal AI governance frameworks, including documented policies for testing and monitoring, “human in the loop” requirements, and that the entity complies with any applicable AI laws and regulations. A representation might be that no AI models or platforms were utilized in the generation of a product, or that all employees have signed a data use agreement that prohibits entering any company information into GenAI models.

Conclusion

The market for generative and agentic AI is expanding rapidly, and AI clauses will only become more common. Failing to understand a business’s AI utilization and to address AI risk contractually is an enterprise-level weakness. Downstream non-compliant service providers or contractors may taint any upstream use of data and create liability. Companies must be prepared to answer questions about their use of AI tools and processes. Even for companies not contemplating a sale or acquisition in the near future, questions about AI use are now appearing as part of the underwriting and renewal process for certain liability and cyber insurance policies. Close reading of any contractual provisions relating to the use of AI or AI-generated data is necessary, as is ensuring compliance with existing restrictions.

Protecting trade secrets starts with preparation. Building strong systems and habits helps keep valuable information secure and limits the risk of leaks.

  1. Inventory Trade Secrets: List what information is confidential and record its value. Keeping good records helps if you ever need to prove your rights.
  2. Regular Employee Training: Teach employees how to recognize and handle trade secrets. Refresh this training regularly so protocols stay top of mind.
  3. Implement Strong Agreements: Have anyone with access sign clear contracts that set expectations during and after their time with your business. Written agreements make enforcement easier if there’s ever a problem.
  4. Control the Use of AI Tools: Limit the use of confidential data in public AI tools. Use only secure, approved systems for handling private information.
  5. Enhance System Security: Enable safeguards such as multifactor authentication, monitor for threats, and block bots and unapproved software to guard against leaks.
  6. Prepare for Employee Exits: Clarify the company’s right to review devices and accounts upon an employee’s departure. Address relevant rules and dispute procedures in advance.

Many companies review trade secret protection only after problems arise, but taking these steps now can help reduce the risk of losing valuable information. In this video, I explain how to safeguard your company before a breach occurs.


#LitigationStrategy #businesslaw #riskmanagement #commerciallitigation #tradesecrets #BillWagnerLaw #aiandlaw #intellectualproperty #dataprotection #artificialintelligence

ERP vendors are notorious for creating a false sense of urgency, using arbitrary support deadlines and promises of expanded functionality and artificial intelligence to force customers to the Cloud.

  • Vendors are not pushing you to the Cloud for your benefit.
  • Just because a vendor is sunsetting support doesn’t mean you don’t have options.
  • One of the worst options is paying the vendor a premium for extended support beyond the drop-dead date.

The reality is that the Cloud is not always better; it can be detrimental.

  • If you have a highly customized system that incorporates your business processes and gives you a competitive advantage, moving to the Cloud could erode that advantage.

Do you really need to move to the Cloud? The answer might surprise you.

CONTACT ME AND MY TEAM
