The EU’s pioneering AI Act, set to take effect in two years, aims to position Europe as a global leader in trustworthy AI. It lays down unified, enforceable rules that emphasize safety and fundamental rights. And it applies to providers and users worldwide, so long as the AI output is intended for use in the EU.

The Act defines an AI system as software that “can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

The Act uses this definition to sort systems into categories based on the level of risk their use poses, and the compliance burden differs greatly by category. For instance, low-risk systems face transparency requirements, while high-risk systems must undergo risk assessments, adopt specific governance structures, and ensure cybersecurity. These requirements affect sectors including medical devices, recruitment and HR, and critical infrastructure.
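
To make the tiered approach concrete, here is a minimal sketch, assuming a simple internal mapping from risk tier to the obligations mentioned above. The tier names and obligation lists are illustrative planning aids, not the Act’s legal categories or requirements.

```python
# Illustrative only: tier names and obligations paraphrase the discussion above;
# they are assumptions for internal planning, not the Act's legal text.
RISK_TIER_OBLIGATIONS: dict[str, list[str]] = {
    "low": ["transparency requirements"],
    "high": [
        "risk assessment and mitigation",
        "specific governance structures",
        "cybersecurity controls",
    ],
}

def obligations_for(tier: str) -> list[str]:
    """Look up the example obligations assumed for a given risk tier."""
    return RISK_TIER_OBLIGATIONS.get(tier, ["unknown tier - escalate for legal review"])

print(obligations_for("high"))
```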

For US businesses relying on general-purpose AI intended for use in the EU, compliance with the AI Act is crucial. They may need to provide technical documentation and summaries of the content used to train their models. Larger AI systems may face additional testing obligations tied to measures of their scale.
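
One way to prepare is to keep a lightweight, machine-readable record of that documentation. The sketch below assumes a handful of illustrative fields (intended purpose, a plain-language training-data summary, a rough scale estimate); it is not the Act’s official template.

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json

@dataclass
class ModelDocumentation:
    """Illustrative documentation record; field names are assumptions for this sketch."""
    model_name: str
    intended_purpose: str
    training_data_summary: str                       # plain-language summary of training content
    training_compute_flops: Optional[float] = None   # rough scale estimate, if known

doc = ModelDocumentation(
    model_name="example-gpai-model",                 # hypothetical model name
    intended_purpose="customer-support chat assistant",
    training_data_summary="licensed corpora plus public web text (summary only)",
    training_compute_flops=1e24,                     # placeholder estimate
)
print(json.dumps(asdict(doc), indent=2))
```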

While uncertainty remains about the Act’s impact on US businesses, a proactive step is to develop and maintain an AI governance framework. Such a framework supports responsible AI development and deployment and helps mitigate risk. Components include creating an AI registry, establishing cross-functional committees, implementing robust policies, and fostering a culture of responsible AI use. Implemented well, it can enhance market share and meet rising expectations for ethical AI practices from customers, partners, and regulators.
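
As a starting point for the registry component mentioned above, a minimal sketch might track each system, its owner, its internally assessed risk tier, and any open compliance actions. The field names and example entry below are assumptions for illustration.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class RegistryEntry:
    """One AI system tracked in an internal AI registry (illustrative fields)."""
    system_name: str
    owner_team: str
    risk_tier: str                        # e.g. "low" or "high", per internal assessment
    last_review: Optional[date] = None
    open_actions: list[str] = field(default_factory=list)

registry: dict[str, RegistryEntry] = {}

def register(entry: RegistryEntry) -> None:
    """Add or update a system in the registry, keyed by its name."""
    registry[entry.system_name] = entry

register(RegistryEntry(
    system_name="resume-screening-assistant",   # hypothetical HR/recruitment system
    owner_team="HR Technology",
    risk_tier="high",
    open_actions=["complete risk assessment", "document cybersecurity controls"],
))
print({name: entry.risk_tier for name, entry in registry.items()})
```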