Your company’s most valuable assets may not appear on your balance sheet. They’re in your systems, your processes, your technology, and your people. Trade secrets don’t require registration and don’t expire, but they only remain protected if you actively safeguard them.

This video explains what qualifies as a trade secret under U.S. law and how to know if your company is doing enough to protect its most valuable information.

#insurancecoverage #LitigationStrategy #businesslaw #riskmanagement #commerciallitigation #tradesecrets #ipprotection #BillWagnerLaw #InnovationLaw

Vendors tout cloud software as a cheaper alternative to traditional on-premise solutions.

  • While cloud solutions can often be implemented at a lower upfront cost, the total cost of accessing and using a cloud solution over the product’s life cycle often exceeds that of an on-premise solution (see the illustrative comparison below).
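As a rough illustration of that lifecycle math, the sketch below compares a subscription-priced cloud ERP against a perpetual-license on-premise deployment over a ten-year horizon. Every dollar figure is a hypothetical placeholder, not vendor pricing; the point is simply that a lower upfront cost can still produce a higher total cost of ownership.

```python
# Hypothetical total-cost-of-ownership (TCO) comparison.
# All amounts are illustrative placeholders, not real vendor pricing.

def cloud_tco(annual_subscription: float, implementation: float, years: int) -> float:
    """Lifecycle cost of a subscription-based cloud ERP."""
    return implementation + annual_subscription * years

def on_prem_tco(license_fee: float, implementation: float,
                annual_maintenance: float, years: int) -> float:
    """Lifecycle cost of a perpetual-license on-premise ERP."""
    return license_fee + implementation + annual_maintenance * years

years = 10
cloud = cloud_tco(annual_subscription=400_000, implementation=300_000, years=years)
on_prem = on_prem_tco(license_fee=800_000, implementation=700_000,
                      annual_maintenance=180_000, years=years)

print(f"Cloud ERP, {years}-year TCO:      ${cloud:,.0f}")    # $4,300,000
print(f"On-premise ERP, {years}-year TCO: ${on_prem:,.0f}")  # $3,300,000
```

In this hypothetical, the cloud option is far cheaper to stand up (roughly $300,000 versus $1.5 million upfront) yet costs about $1 million more over the decade — exactly the pattern the video describes.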

I discuss these issues in my latest video.

#erplawyer #erpcommunity #erpfailure #saps4hana #oraclecontracts #softwarelawyer #sapservices #saphanacloudplatform #saas #erpcloud #teamtaft #sapcontracts #oraclelawsuit #oraclefailure #oracletermination #saptermination

When a lawsuit hits your manufacturing business, the last thing you want is uncertainty about your insurance coverage. 

In this video, I’ll walk you through how to position your company to recover fast and fully when facing legal trouble. If your operations are evolving, your insurance plan should be too.

#insurancecoverage #LitigationStrategy #businesslaw #riskmanagement #commerciallitigation #manufacturingindustry #cyberinsurance #ProductLiability #legalinsights

Stay Connected with Us!

👉 WWagner@taftlaw.com

👉Dir: 317.713.3614 | Cell: 317.431.5979

👉Tel: 317.713.3500 | Fax: 317.715.4537

👉One Indiana Square, Suite 3500, Indianapolis, Indiana 46204-2023

👉 Website: https://www.taftlaw.com/people/willia…

Want to see if we can help with your legal matter? 📞 Reach out today for a consultation!

The likelihood that your ERP project will take longer than expected, be more challenging than anticipated, and cost more than initially estimated is exceptionally high.

  • Common causes of ERP project failure include inadequate requirements gathering, unrealistic timelines, insufficient testing, and resistance to change.
  • What can you do to maximize success?

I talk about this in my latest video.

#erplawyer #erpcommunity #erpfailure #saps4hana #oraclecontracts #softwarelawyer #sapservices #saphanacloudplatform #saas #erpcloud #teamtaft #sapcontracts #oraclelawsuit #oraclefailure #oracletermination #saptermination

CONTACT ME AND MY TEAM: https://www.taftlaw.com/people/marcus…

🔗 EXPLORE OUR LATEST RESOURCES: Taft Technology and AI Blog: https://www.tafttechlaw.com/

ERP Resource page: https://softwarenegotiation.com/erp-r…

Software Negotiation Checklist: https://softwarenegotiation.com/softw…

ERP Negotiation Tips: https://softwarenegotiation.com/tips-…

Common Reasons Why ERP Implementations Fail: https://softwarenegotiation.com/commo…

Key Provisions In An ERP Contract: https://softwarenegotiation.com/draft…

📱 CONNECT WITH ME: LinkedIn: @marcusharris1 | Instagram: @1marcusharris | X: @softwarelawyer | TikTok: @1marcusharris

📩 Got Questions? Contact me: mharris@taftlaw.com

AI is now deeply embedded in global supply chains, forecasting, and business decision-making, especially in the manufacturing industry.

In this video, I explain how manufacturers and tech-driven businesses can use insurance as a powerful risk management tool in disputes involving AI, robotics, and automation. If your company is using AI or system integration in critical operations, this video will show you how to protect your business and preserve leverage in case things go wrong.

#privacylaw #datasecuritylaw #whitecollardefense #defenselaw #classactionlawsuits #ailawsuits #fraudprevention #aifraud #AILegalRisks #insurancecoverage #TechLitigation #businessprotection

Martin Edwards, vice president of Taft’s Public Affairs Strategies Group in the firm’s Washington, D.C. office, contributed to this post.

On July 23, 2025, the White House published “America’s AI Action Plan,” which sets out a public policy framework aimed at securing the United States’ technological leadership in artificial intelligence. The AI Action Plan, coupled with three executive orders implementing U.S. AI policy, is designed to help the United States “lead the world in AI.” President Donald Trump’s administration aims to cement U.S. global dominance in what it deems the three pillars of AI policy: innovation, infrastructure, and international diplomacy and security. Below, Taft analyzes core elements of each pillar and highlights the legal and regulatory considerations relevant to businesses, investors, and public entities.

Pillar 1: Accelerate AI Innovation

The AI Action Plan emphasizes deregulation. The Trump administration proposes removing federal “red tape,” revisiting and rolling back regulations deemed burdensome for AI innovation. Agencies must identify, revise, or repeal rules inhibiting development or deployment, and funding may be limited for states with restrictive AI regimes. In a speech at an AI summit introducing the executive orders outlined in the AI Action Plan, Trump emphasized the need for one federal AI standard rather than several distinct state standards. Although not addressed explicitly in the plan, the federal government will likely seek to deregulate through a state AI legislative moratorium similar to the one abandoned earlier this month. Such a moratorium would require an act of Congress, along with extensive time and negotiation to address the issues identified during the budget reconciliation process, where the effort first took shape.

Although the repeal of regulations deemed to be inhibiting AI is a top priority, the AI Action Plan outlines additional White House values, including:

  • Free Speech and Objectivity: AI systems procured or supported by the federal government should protect free speech and maintain objectivity, avoiding “top-down ideological bias.”
  • Open-Source AI: The AI Action Plan explicitly supports open-source and open-weight AI models. Federal partnerships will be leveraged to increase access to computing resources and foster a robust ecosystem for startups, researchers, and academic institutions.
  • Sectoral AI Adoption: Regulatory sandboxes and “Centers of Excellence” should enable experimentation, with special attention to health care, energy, agriculture, and defense.
  • Manufacturing & IP Protections: Vigorous investment is planned for next-generation manufacturing (chips, robotics, drones) and for protecting commercial and governmental innovations from interference or theft.
  • Synthetic Media & Legal Standards: Legal reforms are anticipated for combating deepfakes and synthetic media, both by developing forensic standards and by adapting evidentiary requirements in the justice system.

Pillar 2: Build American AI Infrastructure

AI requires enormous amounts of energy. Accordingly, the AI Action Plan emphasizes building and maintaining vast AI infrastructure along with the power needed to facilitate processing. The plan proposes categorical exclusions under federal environmental laws for data centers and AI-critical energy infrastructure, as well as fast-tracked permitting for chip fabrication and energy projects. During his remarks, Trump repeatedly stressed that nuclear power can be a safe and effective means of providing that power.

Recognizing AI’s power demands, the AI Action Plan calls for grid upgrades and optimization, making federal land available, supporting the interconnection of dispatchable power sources and new energy generation projects (nuclear, geothermal), and harmonizing resource adequacy standards. The plan also intends to streamline or reduce regulations promulgated under the Clean Air Act, Clean Water Act, Comprehensive Environmental Response, Compensation and Liability Act, and other related laws. In addition, the policy focuses on bringing advanced semiconductor manufacturers to U.S. soil, removing nonessential policy requirements for grant aid, and fostering AI-driven chip production. On the labor front, national initiatives are envisioned to identify high-priority AI-related infrastructure jobs, develop skills frameworks, and expand apprenticeships and technical education.

The AI Action Plan also focuses on information security and incident response in AI as part of its infrastructure priorities. Technical standards will be established for high-security AI data centers with an emphasis on classified computer environments to protect national security intelligence.

Pillar 3: Lead in International AI Diplomacy and Security

The AI Action Plan aims to establish American AI as the worldwide gold standard and ensure international allies are building on U.S. technology. The Trump administration intends to export the entire tech stack (hardware, models, software, and standards) to partners and allies through economic diplomacy, trade support, and technology alliances. U.S. representatives will be charged with resisting “authoritarian” or foreign policy influences, particularly from China, in multilateral AI governance forums. In addition, the plan indicates that more robust enforcement and monitoring of AI compute and semiconductor exports are planned, including new controls on subsystems and international legal alignment.

National security risk governance is also addressed in the AI Action Plan. The Trump administration plans to task federal agencies with evaluating security risks associated with “frontier” AI models, with an eye toward both cyber and CBRNE (chemical, biological, radiological, nuclear, and explosives) dangers. Likewise, new requirements will be placed on recipients of federal scientific funding to use only vendors adopting rigorous screening protocols for nucleic acid synthesis — addressing biothreat and dual-use concerns.

Implications for U.S. Stakeholders

Companies across AI, technology, defense, and infrastructure sectors must pay close attention to fast-moving regulatory change, enhanced export controls, and procurement requirements. Open-source AI and data partnerships present opportunities, while workforce and IP mandates may require new compliance protocols. Funding will increasingly hinge on regulatory attitudes toward AI, possibly incentivizing states to loosen restrictions and harmonize their policies with the federal approach. Finally, expanded access to computing, revised data disclosure expectations, and a focus on next-generation science herald both opportunity and compliance challenges, especially regarding data quality and proprietary research.

Conclusion

The AI Action Plan marks a shift in federal technology and regulatory policy. Emphasizing deregulation, rapid infrastructure buildout, workforce adaptation, and global technology alliances, the proposed roadmap signals substantial changes across the public and private sectors. Businesses, public entities, and law firms should prepare for both new opportunities and enhanced compliance obligations as these initiatives roll out.

The unfortunate reality is that AI fraud is happening right now.

Our team works with businesses to help them understand the real threats posed by AI-generated emails, voice cloning, and deepfakes that are being used to commit sophisticated fraud.

This video explains how these scams work, walks through real-world examples like a $25 million deepfake scam, and lays out practical steps your company can take to verify payment changes, train staff, and tighten contracts to reduce your liability.

Martin Edwards, vice president of Taft’s Public Affairs Strategies Group in the firm’s Washington, D.C. office, contributed to this post.

Early on July 1, the U.S. Senate voted to halt an effort to impose a 10-year moratorium on state regulation of artificial intelligence. The 99-1 vote removed the AI provision from President Trump’s “Big, Beautiful Bill.” The provision had evolved from a full moratorium on state AI regulation for the next decade into its most recent iteration, which required states to adopt the ban in order to receive federal broadband funding over the next five years.

Yesterday, Sen. Marsha Blackburn of Tennessee and Sen. Ted Cruz of Texas attempted to revise the AI ban to address current regulations. According to media reporting, efforts toward banning state AI regulation broke down amidst concerns that the language was overly broad and could adversely impact existing laws concerning privacy, consumer protection, and child safety.

In the last year, states have taken the lead in AI regulatory efforts as the federal government steers toward deregulation. Several state legislatures are currently debating bills to regulate AI developers, distributors, and deployers. The most prominent AI laws on the books include the California AI Transparency Act, the Colorado AI Act, Tennessee’s Ensuring Likeness, Voice, and Image Security (ELVIS) Act, the Utah Artificial Intelligence Policy Act, and the Texas Responsible AI Governance Act (TRAIGA), which was signed into law last week. In addition, several of the 19 state consumer privacy laws currently in effect, or soon to take effect, contain restrictions on algorithmic decision-making.

The AI ban could be resurrected in the U.S. House of Representatives as budget reconciliation efforts continue. However, given the decisive vote to remove it, we expect future deregulatory efforts are more likely to take the form of a standalone federal preemption bill. Based on what we have learned from the most recent Senate negotiations, we expect any such effort would need to include sufficient federal protections as part of any compromise on state law preemption.

In the meantime, despite potential federal constraints, states will remain active players in AI regulation. As we have seen in California, Colorado, Tennessee, Texas, and Utah (and several states debating similar legislation), AI regulation is largely focused on emerging consumer protection needs and a demand for transparency and accountability in high-risk processing fields like employment, finance, health care, and education. Organizations can prepare now by adopting and improving AI governance, documenting risk assessment findings for AI tools in development or use, and working with legal counsel to evaluate how existing laws may impact AI activities.

This update is part of Taft’s White House Toolkit. Please reference the toolkit for additional cross-practice coverage of federal legislative and regulatory activity that may affect businesses or organizations.

Taft partner Marcus Harris was quoted in the TechTarget article, “SAP agrees to allow Celonis data access until case resolved,” published on June 26. In the piece, Harris provided insight into the ongoing legal dispute between SAP and Celonis, addressing the implications of SAP’s agreement to permit continued data access while litigation is pending.

Read the article here.

Harris, who is based in Taft’s Chicago office, discussed how the interim arrangement could impact both parties and set precedents for future data access negotiations in the enterprise software industry. The article highlights Harris’s experience representing clients in complex software-related disputes and his perspective on best practices for managing contractual relationships in rapidly evolving technological landscapes.

In this video, I discuss two recent ERP lawsuits involving SAP and Oracle.

  • In March 2025, Celonis sued SAP in the United States District Court for the Northern District of California, alleging in a 61-page complaint that SAP excludes process mining competitors and other third-party providers from its ecosystem.
  • According to Celonis, SAP makes it virtually impossible for its customers to work with non-SAP process mining solutions because sharing data from the SAP system with third-party solutions is subject to excessive fees.

SAP’s strategy seems to be clear: charge its customers excessive fees to access their own data.

  • This has significant implications for any company that uses third-party AI functionality with data in its SAP ERP system.

Oracle faces another ERP fraud lawsuit.

  • Earlier this year, Veronica’s Auto Insurance Services, Inc., filed suit against Oracle/NetSuite alleging fraudulent inducement, negligent misrepresentation, and breach of contract.

Does all of this sound familiar?

#erplawyer #erpcommunity #erpfailure #saps4hana #oraclecontracts #softwarelawyer #sapservices #saphanacloudplatform #saas #erpcloud #teamtaft #sapcontracts #oraclelawsuit #oraclefailure #oracletermination #saptermination