In recent years, Illinois has become a focal point for privacy litigation, thanks in large part to the Biometric Information Privacy Act (BIPA), which has been the subject of numerous class action lawsuits. However, another Illinois privacy law, the Genetic Information Privacy Act (GIPA), has begun to attract attention from plaintiffs’ attorneys, raising concerns for employers across the state.

Enacted in 1998, GIPA, in part, regulates the collection and use of genetic information by employers in Illinois. Genetic information, as defined by GIPA, includes information derived from genetic tests, the manifestation of diseases or disorders in family members, and the use of genetic services. The law prohibits employers from soliciting, requesting, or requiring genetic information as a condition of employment or during the pre-employment process.

While GIPA has been on the books for over two decades, it has only recently become the target of litigation. In 2023, plaintiffs’ attorneys filed a significant number of class action lawsuits against employers across various industries, alleging violations of GIPA. These cases often involve employers, or an entity acting on their behalf, requesting family medical histories or conducting pre-employment physical examinations that allegedly touch upon genetic information.

One of the reasons for the surge in GIPA litigation is the potential for significant damages. Violations of GIPA can result in statutory penalties ranging from $2,500 to $15,000 per violation, depending on the level of negligence or intent. Moreover, prevailing parties may also be entitled to injunctive relief and attorney fees. While GIPA cases are still in their early stages, parallels can be drawn to the interpretation of BIPA by Illinois courts based on similar statutory language in the provisions describing the private right of action and recoverable damages. It is well settled that a BIPA plaintiff need not allege any actual injury or harm to be deemed “aggrieved” under the act and to recover liquidated or actual damages. It remains to be seen whether GIPA will be interpreted the same way, but such a reading is likely given the similar language, and at least one court has already made such a finding. See Bridges v. Blackstone, Inc., 2022 WL 2643968, at *3 (S.D. Ill. July 8, 2022). Employers should remain vigilant and monitor legal developments closely as the interpretation of GIPA continues to evolve.

In light of the potential risks associated with GIPA compliance, Illinois employers must take proactive steps to ensure adherence to the law. Here are some practical measures to consider:

  • Review Policies and Practices: Employers should review their policies and practices regarding the collection and use of genetic information. This includes evaluating pre-employment questionnaires, wellness programs, and any other processes that may involve genetic data.
  • Contractual Obligations: Employers who outsource services such as physical examinations should review their contracts to ensure compliance with GIPA. Additionally, insurance policies should be examined to determine coverage for potential litigation arising from GIPA violations.
  • Stay Informed: Given the evolving legal landscape surrounding GIPA, employers should stay informed about developments in case law and seek legal guidance as needed to ensure ongoing compliance.

As GIPA litigation continues to gain momentum, Illinois employers must prioritize compliance with this complex privacy law. By understanding their obligations under GIPA, implementing appropriate policies and practices, and staying informed about legal developments, employers can mitigate the risk of costly litigation and protect the privacy rights of their employees.

The EU’s pioneering AI Act, set to take effect in two years, aims to establish Europe as a global leader for trustworthy AI. It provides for enforcement of unified rules, emphasizing safety and fundamental rights. And it applies to providers and users globally, so long as the AI output is intended for EU use.

The Act defines an AI system as software that “can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

The Act uses the above definition to categorize systems based on the amount of risk associated with using them, and the corresponding amount of work required to comply with the Act differs greatly depending on the category. For instance, under the Act, low-risk systems face transparency requirements, while high-risk ones must undergo risk assessments, adopt specific governance structures, and ensure cybersecurity. This impacts various sectors, including medical devices, recruitment, HR, and critical infrastructure.

For US businesses relying on general-purpose AI intended for use in the EU, compliance with the AI Act is crucial. They may need to provide technical documentation and summaries of the content used to train their models. The largest general-purpose models may be subject to additional testing obligations triggered by size-based thresholds, such as the amount of computing power used in training.

While uncertainties surround the Act’s impact on US businesses, proactive measures involve developing and maintaining an AI governance framework. This strategy ensures responsible AI development, deployment, and risk mitigation. Components include creating an AI registry, establishing cross-functional committees, implementing robust policies, and fostering a culture of responsible AI usage. Successful implementation can enhance market share and meet rising expectations for ethical AI practices from customers, partners, and regulators.

An owner of a trade secret that has been misappropriated may seek remedies of injunctive relief and monetary damages to compensate it for the economic harm caused by the party that stole, and benefitted from the theft of, the trade secret. While injunctive relief is often the centerpiece of a trade secret misappropriation claim, the available monetary damages frequently drive litigation strategy and shape a plaintiff’s business, commercial, or market response to the misappropriation.

The trade secret owner may seek injunctive relief and monetary damages under the Uniform Trade Secrets Act (“UTSA”) and, if applicable, the federal Defend Trade Secrets Act (“DTSA,” enacted 2016) or the Economic Espionage Act (“EEA,” enacted 1996), which criminalized the theft or misappropriation of trade secrets to benefit a foreign government or agent. Forty-nine states, the District of Columbia, and the United States territories of Puerto Rico and the Virgin Islands have adopted the UTSA (the state of New York remains the sole holdout). Remedies vary and depend upon the specific language of each state’s version of the UTSA.

Consequently, rules governing the recovery of monetary damages are not uniform across the fifty-two jurisdictions and can be difficult to apply. Here, we discuss monetary remedies under the UTSA and the DTSA. To ensure the fairness of the monetary relief awarded to plaintiff – without interfering with lawful competition by the defendant – courts have identified the following categories of recoverable damages and use one or more of them to arrive at the total amount awarded to the plaintiff trade secret owner:

  1. Actual Loss. Pecuniary losses recoverable by plaintiff include lost profits (including lost sales to customers diverted to the defendant), price erosion, increased costs, and the loss in value of the trade secret caused by defendant’s misappropriation.
  2. Unjust Enrichment. Plaintiff is entitled to recover defendant’s net profits attributable to use of the misappropriated trade secret. Plaintiff has the burden of proving that defendant profited from sales and/or other improper use of plaintiff’s trade secret, and defendant has the burden of proving that its profits, or any portion thereof, were not attributable to use of plaintiff’s trade secret. A court may also consider: costs incurred by plaintiff in the development of its trade secret; costs defendant ‘saved’ or did not incur, but nevertheless benefitted from, by misappropriating the trade secret developed by plaintiff; and defendant’s profits during the ‘lead time’ or ‘head start’ defendant gained in developing its competitive product or business.
  3. Reasonable Royalty. A reasonable royalty is the payment for a hypothetical license that plaintiff and defendant would have negotiated at the time defendant’s improper use of plaintiff’s trade secret began, continuing for the duration of defendant’s use. This hypothetical negotiation may rely on a calculation of the “fair market value” of a license to use the trade secret and/or on comparable licenses plaintiff negotiated with other parties over the relevant time period.
  4. Exemplary Damages. If defendant’s conduct in the misappropriation of plaintiff’s trade secret is willful, malicious, or in bad faith, the court may award: (a) an additional amount not exceeding twice any award made to fairly compensate the trade secret owner for actual losses and defendant’s unjust enrichment resulting from misappropriation; and (b) attorneys’ fees. A simplified numerical sketch of how these categories can combine appears after this list.
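
To make the interplay among these categories concrete, the following is a simplified, hypothetical sketch in Python. The dollar figures, the netting of overlapping recoveries, and the treatment of the reasonable royalty as an alternative measure are illustrative assumptions drawn from the general framework above; actual awards are fact- and jurisdiction-specific, and nothing here is a damages model for any particular case.

```python
# Simplified, hypothetical sketch of the UTSA/DTSA damages framework
# described above. All figures and the netting rule are illustrative
# assumptions; actual recoveries are fact- and jurisdiction-specific.

def total_damages_ceiling(actual_loss: float,
                          unjust_enrichment: float,
                          reasonable_royalty: float,
                          willful: bool) -> float:
    """Estimate the maximum possible award under the categories above."""
    # Categories 1 and 2: actual loss plus any unjust enrichment
    # not already reflected in that loss (no double counting).
    compensatory = actual_loss + unjust_enrichment

    # Category 3: in lieu of the measures above, damages may be
    # measured by a reasonable royalty; take whichever is larger.
    compensatory = max(compensatory, reasonable_royalty)

    # Category 4: for willful, malicious, or bad-faith conduct, an
    # additional amount of up to twice the compensatory award.
    exemplary = 2 * compensatory if willful else 0.0
    return compensatory + exemplary

# Example: $1,000,000 in lost profits, $400,000 of defendant's profits
# not reflected in that loss, a $250,000 royalty alternative, and
# willful conduct yield a $1.4M compensatory award and a $4.2M ceiling.
print(total_damages_ceiling(1_000_000, 400_000, 250_000, willful=True))
```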

Comparatively, New York law governing trade secret claims allows money damages for plaintiff’s actual lost profits (defendant’s gains or profits that are not plaintiff’s actual losses are not considered) and unjust enrichment (except defendant’s ‘saved’ or avoided development costs), as well as other costs and attorneys’ fees, and, in egregious circumstances such as willful and malicious misappropriation, punitive damages (i.e., exemplary damages) in addition to damages awarded for the economic harms suffered by plaintiff. New York law also allows the owner of the trade secret to seek remedies, including monetary awards, for claims of breach of contract, unfair competition, breach of fiduciary duty, and the criminal offense of larceny.

The trade secret owner, as the plaintiff, must decide which specific claims to assert and, often, the venue in which to proceed. Recovery of monetary damages, and the trade secret owner’s business interests, rest upon timely decisions: a close reading of the version of the UTSA adopted by the state in which the claims may proceed; consideration of applicable case law in that jurisdiction; the potential applicability of the federal DTSA; and even the potential applicability of the EEA. Familiarity with all available options is key when time is of the essence.

On July 26, Taft partner Marcus Harris and attorney O. Joseph Balthazor Jr. offered best practices for companies using generative AI for business purposes. This webinar explored how businesses are using generative AI now, legal issues surrounding generative AI, regulations in place for generative AI, and more.

To watch a recording of this webinar, click here.

On May 18, 2023, the Federal Trade Commission (the “FTC”) issued a policy statement on the use of biometric information under its regulatory powers in Section 5 of the FTC Act (the “Statement”). The Statement is the strongest message the FTC has ever issued regarding how certain uses of biometric technology may, depending on the circumstances, constitute unfair and deceptive trade practices under Section 5.

The Statement provides significant insight into the FTC’s shifting priorities and focus on the regulation of the use of biometric technology, a topic that so far has been regulated by state and local law – or not at all. Companies should take heed of the FTC’s guidance for purposes of understanding potential exposure not only at the federal and state regulatory level but also in the form of potential civil lawsuits under state unfair and deceptive trade practice statutes.

The Statement

In the Statement, the FTC stated that it is committed to “combatting unfair or deceptive acts related to the collection and use of consumers’ biometric information and the marketing and use of biometric information technologies.” The FTC defined “biometric information” broadly to include “data that depict or describe physical, biological, or behavioral traits, characteristics, or measurements of or relating to an identified or identifiable person’s body.” This includes, but is not limited to, “depictions, images, descriptions, or recordings of an individual’s facial features, iris or retina, finger or handprints, voice, genetics, or characteristic movements or gestures (e.g., gait or typing pattern).”

The Statement recognizes several scenarios where the use of biometric technology provides “new and increasing risks.” These include (1) the use of biometric information to create counterfeit videos or recordings (“deepfakes”) to commit fraud or defame individuals; (2) the proliferation of biometric information repositories that create attractive targets for malicious actors; (3) the use of technology to reveal sensitive information about consumers, including information related to health care, religion, or politics; and (4) the potential for technology to incorporate deep biases that manifest differently across demographic groups. 

In light of these perceived risks, the Statement sets out a non-exhaustive list of examples of practices that the FTC will scrutinize going forward in determining whether a company’s use or marketing of biometric information technologies complies with Section 5 of the FTC Act. These include the following:

Deceptive Trade Practice Examples

  • False or unsubstantiated marketing claims relating to the validity, reliability, accuracy, performance, fairness, or efficacy of technologies using biometric information; and
  • Deceptive statements about the collection and use of biometric information.

Unfair Trade Practice Examples

  • Failing to protect consumers’ personal information using reasonable data security practices;
  • Engaging in invasive surveillance, tracking, or collection of sensitive personal information that was concealed from consumers or contrary to their expectations;
  • Implementing privacy-invasive default settings in certain circumstances;
  • Disseminating an inaccurate technology that could endanger consumers;
  • Selling technologies with the potential to facilitate harmful or illegal conduct, and failing to take reasonable measures to prevent such conduct; and
  • Using biometric technology in a discriminatory manner.

In evaluating whether biometric technology violates Section 5, the FTC will take into account factors such as whether the company:

  • Fails to assess foreseeable harms to consumers before collecting biometric information;
  • Fails to promptly address known or foreseeable risks;
  • Engages in surreptitious and unexpected collection or use of biometric information;
  • Fails to evaluate the practices and capabilities of third parties who will operate or be given access to biometric technologies;
  • Fails to provide appropriate training for employees and contractors whose job duties involve interacting with biometric information or biometric technologies; and
  • Fails to conduct ongoing monitoring of technologies that the business develops, offers for sale, or uses in connection with biometric information.

Takeaways

Biometrics are directly regulated in a limited number of locations, including Illinois, Texas, Washington, and New York City. While private biometric privacy litigation has flourished in Illinois, there are few instances of private plaintiffs pursuing companies under other states’ laws for the wrongful collection or mishandling of their biometric information.

The Statement may cause an uptick in biometric privacy litigation nationwide, for two reasons.

First, the FTC’s definition of biometric information is significantly broader than definitions found under state laws that regulate biometric technology. Even in states that already regulate biometric information, there may be new exposure for collecting, possessing, or using data relating to an “identified” individual’s characteristics or traits, even if those characteristics or traits themselves are not unique enough to identify an individual with a high degree of reliability.

Second, while the FTC Act does not provide a private right of action, private litigants may attempt to use the Statement to bring claims under their state’s unfair and deceptive trade practices act.

Companies that use or create biometric-enabled technology should take note of the Statement and evaluate their compliance. Contact the authors with any questions.

On June 14, 2023, European Union (EU) parliament members passed the Artificial Intelligence Act (the “EU AI Act”), which, if enacted, would be one of the first laws passed by a major regulatory body to regulate artificial intelligence. It could also serve as a model for policymakers here in the United States as Congress grapples with how to regulate artificial intelligence.

The EU AI Act would, among other things, restrict the use of facial recognition software and require artificial intelligence developers to disclose details about the data they use with their artificial intelligence-powered software. Developers would have to comply with transparency requirements, including publishing summaries of copyrighted materials used in their data sets and incorporating safeguards to prevent the generation of illegal material. The AI Act would also ban companies from scraping biometric data to include in data sets.

The EU AI Act could have major implications for developers of generative artificial intelligence models. Sam Altman, the CEO of OpenAI (creator of ChatGPT and DALL-E 2), recently testified before United States lawmakers and global policymakers, calling for thoughtful and measured regulation of artificial intelligence. Altman has expressed concerns that the EU’s regulations could be overly restrictive, concerns that may stem from OpenAI’s current policy of keeping its training materials secret. A final version of the law is expected to be passed later this year.

Although the European Union trails the United States and China as a major player in artificial intelligence development, Brussels often plays a trend-setting role, with regulations that eventually become de facto global standards. So far, the United States has only offered recommendations and guidance through certain federal agencies. While there has been little effort to enact federal legislation comparable to the AI Act here in the United States, it appears that such regulatory scrutiny is forthcoming.

Indeed, just last month Senate Majority Leader Chuck Schumer met with a bipartisan group of senators to take the initial steps in crafting legislation to regulate artificial intelligence. As a result, artificial intelligence providers and consumers should be mindful of pending regulations and learn and understand any federal government guidance and/or recommendations.

We have often heard the mantra “digitize to survive.” Businesses initiate a digital transformation to drive growth, improve business processes, and enhance the customer experience. According to Gartner, digital transformation is an organizational priority for 87% of senior executives.

A number of studies from academics, consultants, and analysts indicate anywhere from 70% to 95% of organizations fail to realize the expected business benefits of their digital transformations and ERP software implementations. Some studies found that 70% of digital transformations failed due to employee resistance.

The challenges of successfully implementing ERP software or executing a successful digital transformation are daunting. The COVID-19 pandemic made things worse. Because of COVID-19, many digital transformations were either rushed or implemented by teams working remotely. When faced with challenges, companies often cut back on change management, data integration, and training. Paradoxically, these are the exact things that help ensure the success of a digital transformation.

It is no surprise that over 90% of companies polled in a recent KPMG Global Transformation Study have completed a digital transformation in the past two years, but only 18% of those companies rate their digital technology as effective. More often than not, a failed digital transformation falls into one of three categories:

  • Underperformance: The digital transformation is underutilized, and investment lacks focus.
  • Regression: The business believes it is transforming but is actually lagging.
  • New Digital Initiative: The company launches a digital initiative unprepared; the initiative fails, and the company is forced to discontinue it.

Eighty-seven percent of companies believe that digital transformation will disrupt their industry, but only 44% of companies are prepared for the potential disruption. Implementing a digital transformation with a focus on the customer experience can increase customer satisfaction by 20-30% and economic gains by 20-50%. But for companies to reap the benefits of a digital transformation, an understanding of best practices is required. Companies should:

  1. Focus on delivering business results and evaluate business goals to prevent disorganization.
  2. Be skeptical of promises that a digital transformation will be the technological equivalent of a silver bullet.
  3. Focus on fundamental business needs and use advanced analytics to mitigate risk.
  4. Hire the right talent, train existing staff, and consider bringing in people with digital transformation experience.

Successful digital transformations are crucial for companies to survive today. At the end of the day, the adoption of new technology is meant to increase efficiencies and drive costs down. Watch for signs of failure and take the necessary steps to mitigate the risk of a failed digital transformation.

On April 20, Taft partner Marcus Harris and associate Nick Brankle provided tips to avoid a digital transformation relationship trainwreck. This webinar included ways to manage risk, spot vendor red flags, avoid litigation, and negotiate software contracts.

To watch a recording of this webinar, click here.

A new, fun, and fast way to generate words and images has exploded in popularity. The hero (or villain, depending on whom you ask) is a high-powered, complex form of computer programming called generative artificial intelligence (AI). OpenAI, a company riding a multibillion-dollar investment from Microsoft, has popularized generative AI with ChatGPT, a now-viral platform allowing users to generate seemingly anything the mind can imagine in text form. Other companies have created platforms like Midjourney or Adobe Firefly, allowing people to do the same but with images.

Copyright issues surrounding generative AI are unsettled. Courts and the United States Copyright Office are still grappling with issues such as:

  • Is the text or image created by a generative AI platform copyrightable in the first place, or is putting text into the prompt nothing more than an underlying idea?
  • If text or an image is copyrightable, who owns the exclusive rights to the resulting image or text under Copyright law?
  • Can a company using a generative AI platform, like Midjourney, use an image created by the platform in external marketing materials, or does such use expose the company to copyright infringement claims?

Artists, creators, and other stakeholders have put these questions squarely before the Copyright Office and the courts. For example, a group of artists filed a class action against Midjourney and other image-creating generative AI platforms, arguing that those companies’ use of copyright-protected works of art constitutes copyright infringement. The companies have claimed “fair use,” a defense with a complicated meaning under copyright law, and one the United States Supreme Court may dramatically change this year.

Below are some best practices for using these platforms.

  1. Have a written policy. Given the uncertainties, it is wise to implement an internal-use policy for employees using these platforms to increase productivity.
  2. Read the terms of use or terms of service. The terms of use or terms of service may disclose whether a company assigns the rights to all generated content to the user, like OpenAI’s ChatGPT does. Those terms may also require users to put a warning on generated content, telling viewers that it was generated using AI.
  3. Avoid putting confidential or proprietary information into the prompt because confidentiality is not guaranteed. It is worth repeating: read the terms of use or terms of service before allowing employees to use generative AI platforms. OpenAI, for example, discloses that it cannot delete prompts entered by users. It also expressly advises users of ChatGPT not to reveal sensitive information and warns that anything entered into the prompt may be used to train the program later. Although these programs have great potential to solve complex problems and fix bugs in code quickly, it is risky to ask them to solve bugs in proprietary code. For example, three Samsung employees reportedly put secret company code into ChatGPT to help them fix a bug. A minimal example of screening prompts before submission appears after this list.
  4. Avoid putting trademarks, celebrities, or well-known images and characters in the prompt. Popular and well-known images and characters may be subject to copyright protection. Celebrities also tend to protect the rights to use their image and likeness. To avoid generating content that could subject you to copyright infringement allegations, publicity rights claims, and/or trademark infringement claims, avoid putting popular characters, celebrities, and trademarks into the prompt.
  5. Choose platforms wisely, but understand the tradeoffs. Not all companies trained their programs using the same content. Adobe Firefly, for instance, claims to have only trained its image-generating program using licensed content or images in the public domain. Firefly may be a good option for companies trying to minimize infringement allegations, but some users of Firefly beta have commented that limiting the training to licensed and otherwise free-to-use content has come with a creative cost—the quality of the Firefly images may be lower than images generated by platforms that scraped content protected by copyright. Using platforms that scraped all content is not completely unwise, especially if companies stick to number three above and use “reverse-image” searching to see if the AI platform spit out something similar that is protected by copyright. And courts may very well determine in the future that “fair use” applies to AI-generated images, allowing near unfettered use, so staying on top of recent developments is a wise move.
  6. Double check statements of fact. It is prudent to double-check the work of programs like ChatGPT because the information it generates is not always accurate.
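
As a practical complement to item three above, here is a minimal sketch of screening prompts for obvious secrets before they reach a third-party platform. The regular expressions and the send_to_model() placeholder are our own illustrative assumptions, not any vendor’s actual API, and pattern matching will never catch everything; treat it as a backstop to a written policy, not a substitute.

```python
import re

# Minimal, hypothetical sketch: screen prompts for obvious secrets
# before they reach a third-party generative AI platform. The patterns
# and the send_to_model() placeholder are illustrative assumptions, not
# any vendor's actual API; tailor the rules to your own data and policy.

SENSITIVE_PATTERNS = [
    # Credential-style assignments, e.g. "api_key = ...".
    re.compile(r"(?i)\b(api[_-]?key|secret|password|token)\b\s*[:=]\s*\S+"),
    # U.S. Social Security numbers.
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    # Material already labeled as restricted.
    re.compile(r"(?i)\b(confidential|proprietary|trade\s+secret)\b"),
]

def screen_prompt(prompt: str) -> str:
    """Raise if the prompt appears to contain sensitive material."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt appears to contain sensitive data; "
                             "revise it before submission.")
    return prompt

def send_to_model(prompt: str) -> str:
    # Placeholder for whichever platform call your policy permits.
    raise NotImplementedError

# Usage: screen first, then submit.
# send_to_model(screen_prompt("Summarize this public press release: ..."))
```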

If you have questions or need help crafting an internal policy, contact our Copyright team.

Taft Chicago partner Marcus Harris will be a featured speaker for Pemeco Consulting’s webinar, “SaaS Contract Negotiations: A Winning Playbook,” on April 27. The webinar will provide insights on the typical negotiation cycle, key contract terms, how to build a strong bargaining position, and how to master negotiation strategies. 

For more information or to register, click here.

Harris has established one of the country’s leading practices devoted to drafting and negotiating Enterprise Software related license, implementation, and SaaS agreements, as well as litigating failed software implementations in courts and before arbitration panels across the country. He is one of the foremost attorneys in the country representing government entities, distributors, and manufacturers in recovering damages arising from failed Enterprise Resource Planning (ERP) software implementations.