Using biometric identifiers and information continues to be a “go-to” method for authentication. Likewise, the Illinois Biometric Information Privacy Act (BIPA) continues to garner attention in Illinois and nationally as other states adopt their own biometric privacy laws. The Illinois General Assembly enacted BIPA in 2008 in response to growing public concern about the increased commercial use of biometric data. BIPA regulates the collection, use, and storage of biometric information, such as fingerprints, retina scans, and facial recognition data, with the goal of preventing the misuse of biometric identifiers. It requires organizations to obtain informed consent before collecting biometric data, imposes specific obligations on the handling of such information, and provides a private right of action and remedies for violations. BIPA class actions have steadily gained traction and headlines since 2019, when the Illinois Supreme Court held in Rosenbach v. Six Flags Entertainment Corp., 2019 IL 123186, that no actual injury is required to state a claim.

In the past year and a half, the landscape of BIPA litigation has seen significant milestones. Notably, the first-ever BIPA trial resulted in a staggering $228 million judgment for 45,600 violations (one per class member). In 2023, that jury award was set aside and a new trial was ordered on the issue of damages after an Illinois Supreme Court decision found BIPA damages to be discretionary. Over the last several months, the new trial was vacated, and the parties reached a $75 million class action settlement, subject to the court’s approval. See Rogers v. BNSF Railway Co., No. 19-cv-03083 (N.D. Ill. Feb. 26, 2024). Despite this landmark verdict and subsequent settlement, not all decisions have been unfavorable for BIPA defendants. One crucial decision recognized that virtual try-on tools, like those used for trying on glasses and makeup, may be exempt from BIPA under the “health care exemption” (Svoboda v. Frames for America, Inc., No. 21-C-5509, 2022 WL 4109719 (N.D. Ill. Sept. 8, 2022)). Another noteworthy decision held that universities, such as DePaul University, could be exempt from BIPA under the “financial institution exemption” (Powell v. DePaul Univ., No. 21-cv-3001, 2022 WL 16715887 (N.D. Ill. Dec. 6, 2022)). Additionally, a court ruled that a business might not be liable under BIPA where there is no acquisition of biometric data within Illinois (Vance v. Microsoft Corp., No. 20-1082, 2022 WL 9983879 (W.D. Wash. Oct. 17, 2022)).

In 2023, the Illinois Supreme Court issued several landmark decisions shaping BIPA’s future. In a highly anticipated decision, Tims v. Black Horse Carriers, Inc., 2023 IL 127801 (Feb. 2, 2023), the court resolved longstanding uncertainty about the statute of limitations under BIPA, holding that a five-year limitations period applies. In perhaps the most critical BIPA opinion of 2023, Cothron v. White Castle System, Inc., 2023 IL 128004 (Feb. 17, 2023), the court held that a claim under BIPA accrues every time a person’s biometric data is scanned or transmitted without prior consent. Cothron drastically expanded the statutory damages sought by plaintiffs, some of whom now seek millions of dollars for a single plaintiff who used a fingerprint-scanning device. And in Mosby v. Ingalls Memorial Hospital (Nov. 30, 2023), the Illinois Supreme Court overturned a lower state appellate court and ruled that an exclusion in the state’s biometric law applies to health care workers using medication-dispensing devices.

Another emerging issue in BIPA litigation is the scope of insurance coverage available for BIPA violations, which remains unclear after the December 2023 decision in Nat’l Fire Ins. Co. of Hartford v. Visual Pak Co., 2023 IL App (1st) 221160, finding the insurer had no duty to defend its insured. The Illinois appellate court’s holding in Visual Pak is the opposite of the federal court of appeals’ interpretation of a similar insurance policy exclusion in Citizens Ins. Co. of Am. v. Wynndalco Enters., LLC, 70 F.4th 987 (7th Cir. 2023). Notably, the insured in Visual Pak filed a petition for leave to appeal in the Illinois Supreme Court earlier this month. Meanwhile, one federal district court has already followed Visual Pak, reasoning that an Illinois appellate court decision is the best indicium of state law in the absence of a state supreme court ruling, and found Wynndalco distinguishable. See Citizens Ins. Co. of Am. v. Mullins Food Products, Inc., No. 22-cv-1334 (N.D. Ill. Feb. 27, 2024). These decisions create tension between the federal and state courts’ interpretations of the scope of coverage and will likely prompt further clarification from the Illinois Supreme Court, the Seventh Circuit, or both.
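To put Cothron’s per-scan accrual rule in concrete terms, consider the arithmetic for a single hypothetical employee. The sketch below uses BIPA’s statutory amounts of $1,000 per negligent violation and $5,000 per intentional or reckless violation; the scan counts and time period are illustrative assumptions only, not facts from any case.

```python
# Hypothetical per-scan BIPA exposure after Cothron; all inputs are
# illustrative assumptions, not drawn from any actual case.
NEGLIGENT_DAMAGES = 1_000  # statutory amount per negligent violation
RECKLESS_DAMAGES = 5_000   # per intentional or reckless violation

def per_scan_exposure(scans_per_day: int, workdays_per_year: int,
                      years: int, per_violation: int) -> int:
    """Under Cothron, each unconsented scan or transmission is a separate violation."""
    violations = scans_per_day * workdays_per_year * years
    return violations * per_violation

# One employee clocking in and out daily over the five-year period set by Tims:
print(f"${per_scan_exposure(2, 250, 5, NEGLIGENT_DAMAGES):,}")  # $2,500,000
print(f"${per_scan_exposure(2, 250, 5, RECKLESS_DAMAGES):,}")   # $12,500,000
```

On these assumed facts, a single plaintiff’s claim runs into the millions, which is why Cothron so dramatically raised the stakes for defendants.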

These decisions will collectively shape the landscape for defending and prosecuting BIPA actions in the coming years. In 2024, the BIPA landscape is expected to continue evolving, influenced by recent key decisions and regulatory developments, with an increase in filings as plaintiffs’ attorneys explore the boundaries of the law, its exemptions, and the scope of available damages. Voice recognition cases, like Wilcosky v. Amazon.com Inc., No. 19-CV-5061 (N.D. Ill. Nov. 1, 2023), are likely to further clarify what is necessary to identify a person and establish a BIPA violation. Notably, late last year, the Wilcosky court denied Amazon’s motion to dismiss, focusing primarily on Amazon’s arguments against the plaintiffs’ Section 15(b) claim, which challenges the sufficiency of the disclosures provided to the plaintiffs before the collection of their purported voiceprints.

Beyond BIPA, businesses should remain vigilant about federal regulations, especially after the Federal Trade Commission (FTC) released a policy statement in mid-2023. The FTC’s clarification of what constitutes biometric information and the expectations for businesses collecting and using such data aims to prevent “unfair” or “deceptive” practices. The policy statement provides examples of how businesses should analyze practices, including discussions on potential harms and ongoing due diligence for biometric information service providers. As businesses navigate this evolving landscape of biometric data regulations at the state and federal levels, clear and accurate public-facing notices regarding biometric information practices will be crucial for compliance and transparency, whether directed to employees or consumers impacted by the technologies in use.

Washington’s “My Health, My Data” Act: Major Portions Go Into Effect March 31, 2024, Including Its Private Right of Action

Washington’s “My Health, My Data Act” is a new data privacy statute that regulates the collection, sharing, selling, and processing of “consumer health data” by certain entities. The act is intended to protect health data not otherwise covered by federal health care privacy regulations, such as data held by entities not regulated by HIPAA. Despite its name, the act broadly regulates personal data well beyond traditional health care data, and in certain circumstances it reaches the data of non-Washington residents and businesses located outside of Washington. “Regulated entities,” as defined by the act, must comply with its obligations beginning March 31, 2024, while “small businesses” have until June 30, 2024. The act will have wide-reaching ramifications for businesses due to the expansive scope of its coverage and its enforcement through a private right of action.

Individuals Covered and Entities Regulated

The act defines key terms as follows: A “consumer” includes Washington residents, as well as any individual whose data is collected in Washington. “Collect” means “to buy, rent, access, retain, receive, acquire, infer, derive, or otherwise process consumer health data in any manner.” This broad definition will likely become the subject of litigation and eventual judicial interpretation. Notably, the definition of “process” is sweeping and ambiguous. It is “any operation or set of operations performed on consumer health data,” which does not provide clarity as to the types of actions included. Therefore, businesses should assume that any interaction with consumer health data in Washington or from Washington’s residents may be subject to the act until the courts provide greater clarity. The definition of consumer does not cover an individual acting in an employment context, so employee data is excluded from the act.

The act applies to “regulated entities” and “small businesses.” Regulated entities include those that conduct business in Washington as well as entities that produce or provide products or services targeting consumers in Washington. Small businesses are generally entities that collect consumer health data of fewer than 100,000 consumers per year. Other than the required implementation dates, the obligations under the act are the same for regulated entities and small businesses. The act does not apply to government agencies or to providers that process consumer health data on behalf of government agencies. The act does not otherwise limit the types of entities subject to its provisions, such as by an entity’s revenue or non-profit status.

Scope of Protected Information

The act broadly defines the term “consumer health data” to include “personal information that is linked or reasonably linkable to a consumer and that identifies the consumer’s past, present, or future physical or mental health status.” A non-exhaustive list of categories that qualify as “physical or mental health status” includes:

  • General health data such as individual health conditions, treatment, diseases, or diagnosis;
  • Social, psychological, behavioral, and medical interventions;
  • Health-related surgeries or procedures;
  • Use or purchase of prescribed medication;
  • Bodily functions, vital signs, and symptoms;
  • Diagnoses or diagnostic testing, treatment, or medication;
  • Gender-affirming care information;
  • Reproductive or sexual health information;
  • Biometric data, such as imagery of a retina, iris, fingerprint, hand, and face, and voice recordings;
  • Genetic data, such as DNA data;
  • Precise location information reasonably indicating a consumer’s attempt to receive health services or supplies; and
  • Data identifying a consumer as seeking health care services.

Some exclusions apply to the data protected under the law. For example, deidentified data that cannot reasonably be linked to a particular individual and publicly available information are excluded from the definition of personal information. Further, the statute does not regulate data protected by certain federal laws.

Requirements for the Collection, Sharing, Sale, and Processing of Consumer Health Information

Like many data privacy laws enacted in recent years, the act includes various notice, consent, and security requirements. Washington’s act imposes the following obligations:

  • Data Privacy Policy: Entities must maintain a detailed consumer health data privacy policy that is linked on their homepage. The policy must disclose (1) the categories of health data collected and shared and the sources from which it is collected; (2) the purposes for which the data is used; (3) the categories of third parties and affiliates that will receive the data; and (4) how consumers can exercise their rights under the act.
  • Consent: There are various situations in which consent is required under the act. Entities cannot collect consumer health data unless they obtain consent for the specific purpose of the collection. Consent is not required if the collection is necessary to provide a product or service the consumer requested. Consumer health data cannot be shared or sold without obtaining separate consent from the consumer. Additionally, the act requires certain disclosures depending on the consent required. Separate consents are required for the collection, sharing, and sale of data.
  • Right to Access and Deletion: Similar to other data privacy laws, consumers have the right to access their data and request its deletion. If a consumer requests deletion, the regulated entity must delete the data from its records, archives, and backups, and must notify all affiliates, processors, contractors, and third parties that the data must be deleted.
  • Data Security: Among other requirements, entities must restrict internal access to consumer health data to only individuals who require access for the purposes described in the consent obtained. Entities must establish, implement, and maintain various data security practices that meet the reasonable standard of care within the entity’s industry.
  • Relationship with Data Processors: Notably, the act requires a binding contract between an entity and a data processor that processes consumer health data on the entity’s behalf. The contract must set forth various obligations and limitations on the processor. If the processor fails to comply, it becomes a “regulated entity” subject to all the requirements in the act. The act has no limitation on the location of the processor for it to be subject to the requirements in the act. It also prohibits an entity regulated by the act from contracting with a data processor to process consumer health data in a manner that is inconsistent with the act’s requirements.
  • Prohibits Geofencing: A “geofence” is technology that creates a virtual boundary around a physical location or locates a consumer within a virtual boundary using data such as GPS, cell tower, and Wi-Fi data. The act makes it unlawful for any person to implement a geofence around an entity that provides in-person health care services if the geofence is used to (1) track or identify consumers seeking health care services; (2) collect health data from consumers; or (3) send messaging or advertisements to consumers related to health data or services. The ban on geofencing has no implementation date and, therefore, should be presumed to already be enforceable.
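For context on what the geofencing prohibition targets technically, below is a minimal sketch of the kind of boundary check a geofence performs, assuming a simple circular fence and hypothetical coordinates; real systems combine GPS, cell tower, and Wi-Fi signals with far more sophisticated logic.

```python
import math

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters between two latitude/longitude points."""
    earth_radius_m = 6_371_000
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * earth_radius_m * math.asin(math.sqrt(a))

def inside_geofence(device_lat: float, device_lon: float,
                    fence_lat: float, fence_lon: float, radius_m: float) -> bool:
    """True if a device reading falls within the virtual boundary."""
    return haversine_m(device_lat, device_lon, fence_lat, fence_lon) <= radius_m

# Hypothetical fence around a clinic and a nearby device reading.
print(inside_geofence(47.6070, -122.3330, 47.6062, -122.3321, radius_m=200))  # True
```

Under the act, using this kind of boundary check around an in-person health care provider to track consumers, collect their health data, or target them with messaging is unlawful.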

Enforcement

Washington’s Attorney General can enforce violations of the act through Washington’s Consumer Protection Act. More importantly, the Consumer Protection Act also gives consumers a private right of action for violations of the My Health, My Data Act to seek injunctive relief, damages for violations, and attorneys’ fees. This will undoubtedly lead to class action filings. Businesses familiar with operating in Illinois, however, should note that the act differs from Illinois’ Biometric Information Privacy Act (BIPA), which regulates the collection of biometric information. BIPA allows an individual to obtain statutory damages of $1,000 or $5,000 per violation without showing actual harm — making it particularly susceptible to class actions. By contrast, the act is enforced through Washington’s Consumer Protection Act, which requires consumers to show they suffered injuries to their businesses or property and sets recovery at an amount equal to those injuries.

Takeaway

As discussed above, this act is complex in its scope and application. Exceptions exist, such as for data already regulated by federal laws and employee data. However, if the act follows the development of Illinois’ BIPA, plaintiffs will quickly and repeatedly test the boundaries of the private right of action. The courts will likely take some time to interpret the act through litigation by private plaintiffs and enforcement actions by the state attorney general. As seen with BIPA, litigants will likely also test the bounds of who may be sued under the act, regardless of whether the entity initially collects the data—for example, against technology companies that host, store, and process data in Washington.

Any entity that believes it may “collect” any of the types of data qualifying as “consumer health data” should immediately review its policies and processes to ensure compliance with the act. Any entity that provides services to an entity regulated under the act should also review its service provider contracts to determine whether additional terms are needed to comply with the act. The act should also serve as a reminder to businesses outside its scope that state legislatures around the country will continue to consider legislation directed at the collection and use of personal data. Businesses should remain vigilant as they implement new technologies that collect or use personal data and consider consulting legal counsel before undertaking costly implementations.

In recent years, Illinois has become a focal point for privacy litigation, thanks in large part to the Biometric Information Privacy Act (BIPA), which has been the subject of numerous class action lawsuits. However, another Illinois privacy law, the Genetic Information Privacy Act (GIPA), has begun to attract attention from plaintiffs’ attorneys, raising concerns for employers across the state.

Enacted in 1998, GIPA, in part, regulates the collection and use of genetic information by employers in Illinois. Genetic information, as defined by GIPA, includes details from genetic tests, the presence of diseases or disorders, and genetic services. The law prohibits employers from soliciting, requesting, or requiring genetic information as a condition of employment or during the pre-employment process.

While GIPA has been on the books for over two decades, it has only recently become a target of litigation. In 2023, plaintiffs’ attorneys filed a significant number of class action lawsuits against employers across various industries, alleging violations of GIPA. These cases often involve employers, or an entity acting on their behalf, requesting family medical histories or conducting pre-employment physical examinations that allegedly touch upon genetic information.

One of the reasons for the surge in GIPA litigation is the potential for significant damages. Violations of GIPA can result in statutory penalties ranging from $2,500 to $15,000 per violation, depending on the level of negligence or intent, and prevailing parties may also be entitled to injunctive relief and attorney fees. While GIPA cases are still in their early stages, parallels can be drawn to the Illinois courts’ interpretation of BIPA, which contains similar statutory language in the provisions describing the private right of action and recoverable damages. It is well settled that a BIPA plaintiff need not allege any actual injury or harm to be deemed “aggrieved” under the act and to recover liquidated or actual damages. It remains to be seen whether GIPA will be interpreted the same way, but that outcome is likely given the similar language, and at least one court has already made such a finding. See Bridges v. Blackstone, Inc., 2022 WL 2643968, at *3 (S.D. Ill. July 8, 2022). Employers should remain vigilant and monitor legal developments closely as the interpretation of GIPA continues to evolve.
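Because those penalties accrue per violation, aggregate exposure in a class action compounds quickly. Here is a minimal sketch of that arithmetic; the class size and violation counts are purely hypothetical.

```python
# Hypothetical GIPA class-wide exposure; all counts are illustrative only.
NEGLIGENT_PENALTY = 2_500     # statutory penalty per negligent violation
INTENTIONAL_PENALTY = 15_000  # per intentional or reckless violation

def class_exposure(class_members: int, violations_each: int, penalty: int) -> int:
    """Aggregate statutory exposure across a class."""
    return class_members * violations_each * penalty

# A 1,000-member class, one alleged violation each (e.g., one pre-employment exam):
print(f"${class_exposure(1_000, 1, NEGLIGENT_PENALTY):,}")    # $2,500,000
print(f"${class_exposure(1_000, 1, INTENTIONAL_PENALTY):,}")  # $15,000,000
```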

In light of the potential risks associated with GIPA compliance, Illinois employers must take proactive steps to ensure adherence to the law. Here are some practical measures to consider:

  • Review Policies and Practices: Employers should review their policies and practices regarding the collection and use of genetic information. This includes evaluating pre-employment questionnaires, wellness programs, and any other processes that may involve genetic data.
  • Contractual Obligations: Employers who outsource services such as physical examinations should review their contracts to ensure compliance with GIPA. Additionally, insurance policies should be examined to determine coverage for potential litigation arising from GIPA violations.
  • Stay Informed: Given the evolving legal landscape surrounding GIPA, employers should stay informed about developments in case law and seek legal guidance as needed to ensure ongoing compliance.

As GIPA litigation continues to gain momentum, Illinois employers must prioritize compliance with this complex privacy law. By understanding their obligations under GIPA, implementing appropriate policies and practices, and staying informed about legal developments, employers can mitigate the risk of costly litigation and protect the privacy rights of their employees.

The EU’s pioneering AI Act, set to take effect in two years, aims to establish Europe as a global leader for trustworthy AI. It provides for enforcement of unified rules, emphasizing safety and fundamental rights. And it applies to providers and users globally, so long as the AI output is intended for EU use.

The Act defines an AI system as software that “can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

The Act uses this definition to categorize systems based on the risk associated with their use, and the compliance burden differs greatly by category. For instance, low-risk systems face transparency requirements, while high-risk systems must undergo risk assessments, adopt specific governance structures, and ensure cybersecurity. This impacts various sectors, including medical devices, recruitment, HR, and critical infrastructure.

For US businesses relying on general-purpose AI intended for use in the EU, compliance with the AI Act is crucial. They may need to provide technical documentation and summaries about training content. Larger general-purpose models may face additional testing obligations under thresholds tied to model scale, such as the computing resources used in training.

While uncertainties surround the Act’s impact on US businesses, proactive measures involve developing and maintaining an AI governance framework. This strategy ensures responsible AI development, deployment, and risk mitigation. Components include creating an AI registry, establishing cross-functional committees, implementing robust policies, and fostering a culture of responsible AI usage. Successful implementation can enhance market share and meet rising expectations for ethical AI practices from customers, partners, and regulators.

An owner of a trade secret that has been misappropriated may seek remedies of injunctive relief and monetary damages to compensate it for the economic harm caused by the party that stole and benefited from the trade secret. While injunctive relief is often the centerpiece of a trade secret misappropriation claim, the available monetary damages frequently drive litigation strategy and shape a plaintiff’s business, commercial, or market response to the misappropriation.

The trade secret owner may seek injunctive relief and monetary damages under the Uniform Trade Secrets Act (“UTSA”); if applicable, the federal Defend Trade Secrets Act (“DTSA,” enacted 2016); or the Economic Espionage Act (“EEA,” enacted 1996), which criminalizes the theft or misappropriation of trade secrets to benefit a foreign government or agent. Forty-nine states, the District of Columbia, and the United States territories of Puerto Rico and the Virgin Islands have adopted the UTSA (New York remains the sole holdout). Remedies vary and depend upon the specific language of each state’s version of the UTSA.

Consequently, rules governing the recovery of monetary damages are not uniform across these fifty-two jurisdictions and can be difficult to apply. Here, we discuss monetary remedies under the UTSA and the DTSA. To ensure the fairness of the monetary relief awarded to the plaintiff – without interfering with lawful competition by the defendant – courts have identified the following categories of recoverable damages and utilize one or more of them to arrive at the total amount awarded to the plaintiff trade secret owner:

  1. Actual Loss. Pecuniary losses recoverable by the plaintiff include lost profits (including lost sales to customers diverted to the defendant), price erosion, increased costs, and the loss in value of the trade secret caused by the defendant’s misappropriation.
  2. Unjust Enrichment. The plaintiff is entitled to recover the defendant’s net profits attributable to use of the misappropriated trade secret. The plaintiff bears the burden of proving that the defendant profited from sales or other improper use of the trade secret, and the defendant bears the burden of proving that its profits, or any portion thereof, were not attributable to that use. A court may also consider: costs incurred by the plaintiff in developing its trade secret; costs the defendant “saved” or did not incur, but nevertheless benefited from, by misappropriating the trade secret; and the defendant’s profits during the “lead time” or “head start” it gained in developing its competitive product or business.
  3. Reasonable Royalty. A reasonable royalty is the payment for a hypothetical license that the plaintiff and defendant would have negotiated at the time the defendant’s improper use of the trade secret began, continuing for the duration of that use. This hypothetical negotiation may rely on a calculation of the “fair market value” of a license to use the trade secret and/or comparable licenses the plaintiff may have negotiated with other parties over the relevant period.
  4. Exemplary Damages. If the defendant’s misappropriation is willful, malicious, or in bad faith, the court may award: (a) an additional amount not exceeding twice any award made to compensate the trade secret owner for actual losses and the defendant’s unjust enrichment resulting from the misappropriation; and (b) attorneys’ fees.
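To make the interplay among these categories concrete, here is a minimal sketch of how a maximum award might be assembled, assuming hypothetical figures and the common rule that unjust enrichment is recoverable only to the extent it is not duplicative of actual loss, with exemplary damages capped at twice the compensatory award:

```python
# Hypothetical assembly of a damages award under a UTSA-style framework.
# All dollar figures are illustrative assumptions, not drawn from any case.

def total_award(actual_loss: float, nonduplicative_enrichment: float,
                willful: bool) -> float:
    """Compensatory award = actual loss plus the defendant's gains not already
    counted in the plaintiff's loss; exemplary damages, available for willful
    or malicious misappropriation, are capped at twice the compensatory award."""
    compensatory = actual_loss + nonduplicative_enrichment
    exemplary = 2 * compensatory if willful else 0.0  # statutory ceiling: 2x
    return compensatory + exemplary

# Example: $1.2M lost profits, $300K of non-duplicative defendant profits,
# willful conduct -> $1.5M compensatory + up to $3.0M exemplary = $4.5M.
print(f"${total_award(1_200_000, 300_000, willful=True):,.0f}")  # $4,500,000
```

A reasonable royalty would typically be an alternative measure where actual loss and unjust enrichment are difficult to prove, not an addition to them.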

Comparatively, New York, which has not adopted the UTSA, governs trade secret claims under its common law. New York allows money damages for the plaintiff’s actual lost profits (the defendant’s gains that are not also the plaintiff’s actual losses are not considered) and unjust enrichment (excluding the defendant’s “saved” or avoided development costs), as well as other costs and attorneys’ fees, and, in egregious circumstances such as willful and malicious misappropriation, punitive (i.e., exemplary) damages in addition to damages awarded for the plaintiff’s economic harms. New York law also allows the trade secret owner to seek remedies, including monetary awards, through claims for breach of contract, unfair competition, breach of fiduciary duty, and the criminal offense of larceny.

The trade secret owner, as the plaintiff, must decide which specific claims to assert and, often, the venue in which to proceed. Recovery of monetary damages, and the trade secret owner’s business interests, rest upon timely decisions: a close reading of the version of the UTSA adopted by the state in which the claims may proceed; consideration of applicable case law in that jurisdiction; the potential applicability of the federal DTSA; and even the potential applicability of the EEA. Familiarity with all available options is key when time is of the essence.

On July 26, Taft partner Marcus Harris and attorney O. Joseph Balthazor Jr. offered best practices for companies using generative AI for business purposes. This webinar explored how businesses are using generative AI now; legal issues surrounding generative AI; regulations in place for generative AI; and more.

To watch a recording of this webinar, click here.

On May 18, 2023, the Federal Trade Commission (the “FTC”) issued a policy statement on the use of biometric information under its regulatory powers in Section 5 of the FTC Act (the “Statement”). The Statement is the strongest message the FTC has ever issued regarding how certain uses of biometric technology may, depending on the circumstances, constitute unfair and deceptive trade practices under Section 5.

The Statement provides significant insight into the FTC’s shifting priorities and focus on the regulation of the use of biometric technology, a topic that so far has been regulated by state and local law – or not at all. Companies should take heed of the FTC’s guidance for purposes of understanding potential exposure not only at the federal and state regulatory level but also in the form of potential civil lawsuits under state unfair and deceptive trade practice statutes.

The Statement

In the Statement, the FTC stated that it is committed to “combatting unfair or deceptive acts related to the collection and use of consumers’ biometric information and the marketing and use of biometric information technologies.” The FTC defined “biometric information” broadly to include “data that depict or describe physical, biological, or behavioral traits, characteristics, or measurements of or relating to an identified or identifiable person’s body.” This includes, but is not limited to, “depictions, images, descriptions, or recordings of an individual’s facial features, iris or retina, finger or handprints, voice, genetics, or characteristic movements or gestures (e.g., gait or typing pattern).”

The Statement recognizes several scenarios where the use of biometric technology poses “new and increasing risks.” These include (1) the use of biometric information to create counterfeit videos or recordings (“deepfakes”) to commit fraud or defame individuals; (2) the proliferation of biometric information repositories that create attractive targets for malicious actors; (3) the use of technology to reveal sensitive information about consumers, including information related to health care, religion, or politics; and (4) the potential for technology to incorporate biases that manifest differently across demographic groups.

In light of these perceived risks, the Statement sets out a non-exhaustive list of examples of practices that the FTC will scrutinize going forward in determining whether a company’s use or marketing of biometric information technologies complies with Section 5 of the FTC Act. These include the following:

Deceptive Trade Practice Examples

  • False or unsubstantiated marketing claims relating to the validity, reliability, accuracy, performance, fairness, or efficacy of technologies using biometric information; and
  • Deceptive statements about the collection and use of biometric information.

Unfair Trade Practice Examples

  • Failing to protect consumers’ personal information using reasonable data security practices;
  • Engaging in invasive surveillance, tracking, or collection of sensitive personal information that was concealed from consumers or contrary to their expectations;
  • Implementing privacy-invasive default settings in certain circumstances;
  • Disseminating an inaccurate technology that could endanger consumers;
  • Selling technologies with the potential to facilitate harmful or illegal conduct, and failing to take reasonable measures to prevent such conduct; and
  • Using biometric technology in a discriminatory manner.

In evaluating whether biometric technology violates Section 5, the FTC will take into account factors such as whether the company:

  • Fails to assess foreseeable harms to consumers before collecting biometric information;
  • Fails to promptly address known or foreseeable risks;
  • Engages in surreptitious and unexpected collection or use of biometric information;
  • Fails to evaluate the practices and capabilities of third parties who will operate or be given access to biometric technologies;
  • Fails to provide appropriate training for employees and contractors whose job duties involve interacting with biometric information or biometric technologies; and
  • Fails to conduct ongoing monitoring of technologies that the business develops, offers for sale, or uses in connection with biometric information.

Takeaways

Biometrics are directly regulated in a limited number of locations, including Illinois, Texas, Washington, and New York City. While private biometric privacy litigation has flourished in Illinois, there are few instances of private plaintiffs pursuing companies under other states’ laws for the wrongful collection or mishandling of their biometric information.

The Statement may cause an uptick in biometric privacy litigation nationwide for two reasons.

First, the FTC’s definition of biometric information is significantly broader than definitions found under state laws that regulate biometric technology. Even in states that already regulate biometric information, there may be new exposure for collecting, possessing, or using data relating to an “identified” individual’s characteristics or traits, even if those characteristics or traits themselves are not unique enough to identify an individual with a high degree of reliability.

Second, while the FTC Act does not provide a private right of action, private litigants may attempt to use the Statement to bring claims under their state’s unfair and deceptive trade practices act.

Companies that use or create biometric-enabled technology should take note of the Statement and evaluate their compliance. Contact the authors with any questions.

On June 14, 2023, European Union (EU) parliament members passed the Artificial Intelligence Act (the “EU AI Act”), which, if enacted, would be one of the first laws passed by a major regulatory body to regulate artificial intelligence. It could also serve as a model for policymakers in the United States as Congress grapples with how to regulate artificial intelligence.

The EU AI Act would, among other things, restrict use of facial recognition software and require artificial intelligence developers to disclose details about the data they use with their artificial intelligence-powered software. Developers would need to comply with transparency requirements, publishing summaries of copyrighted materials used in their data sets and incorporating safeguards to prevent the generation of illegal material. The AI Act would also ban companies from scraping biometric data to include in data sets.

The EU AI Act could have major implications for developers of generative artificial intelligence models. Sam Altman, the CEO of OpenAI (creator of ChatGPT and DALL-E 2), recently testified before United States lawmakers and global policymakers, calling for thoughtful and measured regulation of artificial intelligence. Altman has expressed concerns that the EU’s regulations could be overly restrictive, concerns that may stem from OpenAI’s current policy of keeping its training materials secret. A final version of the law is expected to be passed later this year.

Although the European Union trails the United States and China as a major player in artificial intelligence development, Brussels often plays a trend-setting role, issuing regulations that eventually become de facto global standards. So far, the United States has offered only recommendations and guidance through certain federal agencies. While there has been little effort to enact federal legislation comparable to the AI Act in the United States, such regulatory scrutiny appears to be forthcoming.

Indeed, just last month Senate Majority Leader Chuck Schumer met with a bipartisan group of senators to take the initial steps in crafting legislation to regulate artificial intelligence. As a result, artificial intelligence providers and consumers should be mindful of pending regulations and track any federal guidance and recommendations.

We have often heard the mantra “digitize to survive.” Businesses initiate a digital transformation to drive growth, improve business processes, and enhance the customer experience. According to Gartner, digital transformation is an organizational priority for 87% of senior executives.

A number of studies from academics, consultants, and analysts indicate anywhere from 70% to 95% of organizations fail to realize the expected business benefits of their digital transformations and ERP software implementations. Some studies found that 70% of digital transformations failed due to employee resistance.

The challenges of successfully implementing ERP software or executing a digital transformation are daunting, and the COVID-19 pandemic made things worse. Because of COVID-19, many digital transformations were either rushed or implemented by teams working remotely. When faced with challenges, companies often cut back on change management, data integration, and training. Paradoxically, these are the exact things that help ensure the success of a digital transformation.

It is no surprise that while over 90% of companies polled in a recent KPMG Global Transformation Study completed a digital transformation in the past two years, only 18% rate their digital technology as effective. More often than not, a failed digital transformation falls into one of three categories:

  • Underperformance: The digital transformation is underutilized and investment lacks focus.
  • Regression: The business believes it is transforming but is actually lagging.
  • New Digital Initiative: An unprepared company launches a digital initiative that fails and must be discontinued.

Eighty-seven percent of companies believe that digital transformation will disrupt their industry, but only 44% are prepared for the potential disruption. Implementing a digital transformation with a focus on the customer experience can increase customer satisfaction by 20-30% and economic gains by 20-50%. But to reap the benefits of a digital transformation, companies need an understanding of best practices. Companies should:

  1. Focus on delivering business results and evaluate business goals to prevent disorganization.
  2. Be skeptical of promises that a digital transformation will be the technological equivalent of a silver bullet.
  3. Focus on fundamental business needs and use advanced analytics to mitigate risk.
  4. Hire the right talent, train existing staff, and consider bringing in people with digital transformation experience.

Successful digital transformations are crucial for companies to survive today. At the end of the day, the adoption of new technology is meant to increase efficiencies and drive costs down. Be cautious and aware of signs of failure and take the necessary steps to mitigate digital transformation failure.

On April 20, Taft partner Marcus Harris and associate Nick Brankle provided tips to avoid a digital transformation relationship trainwreck. This webinar included ways to manage risk, spot vendor red flags, avoid litigation, and negotiate software contracts.

To watch a recording of this webinar, click here.