Sept. 17, 2024
This is part of a series from Nelson Mullins' AI Task Force. We will continue to provide additional insight on domestic and international matters across various industries in both the public and private sectors.
The Framework Convention on Artificial Intelligence (the “Framework Convention”), the first legally binding international treaty aimed at addressing AI safety, was officially opened for signature on Sept. 5. The Framework Convention applies to both the public and private sectors, and it was signed by key global players, including the United States, European Union, and United Kingdom, along with other nations such as Israel and Norway. Although this is a global initiative, U.S. companies utilizing “artificial intelligence systems” will need to understand the agreement’s practical impacts on their operations, particularly with respect to compliance, timelines, and enforcement. The Framework Convention defines an artificial intelligence system as a machine-based system that infers from the input it receives, for explicit or implicit objectives, how to generate outputs such as predictions, content, recommendations, or decisions that may influence physical or virtual environments.
Timelines for implementation
The timeline for adopting the requirements of the Framework Convention will vary based on how quickly individual signatory nations, including the U.S., can integrate these provisions into their domestic legal systems. U.S. companies should prepare for a phased approach, where regulatory guidance will likely emerge gradually. However, with increased attention on AI safety, it is reasonable to expect an expedited implementation compared to past international agreements.
Key requirements for U.S. companies
For U.S. companies utilizing artificial intelligence systems, several critical requirements will arise from the Framework Convention’s provisions.
Enforcement and compliance expectations
While the Framework Convention sets the stage for international collaboration on AI safety, enforcement remains in the hands of each country’s regulators. In the U.S., we can expect the Federal Trade Commission (FTC) to play a significant role in ensuring compliance with these provisions, as the FTC has already demonstrated a strong focus on AI and algorithmic transparency. Enforcement may vary by industry sector, however. For example, industries like finance, healthcare, and insurance—where artificial intelligence systems can have significant human impacts—are likely to see more stringent and swifter oversight than sectors where AI impacts are less immediate.
The Framework Convention is a step toward globally regulating artificial intelligence systems. For U.S. companies, the focus should now be on anticipating regulatory changes and preparing to meet the heightened standards of transparency and accountability that will be required.
Follow Nelson Mullins' Idea Exchange for more thought leadership from our AI Task Force or click here to subscribe to emails from the Nelson Mullins AI Task Force blog.
These materials have been prepared for informational purposes only and are not legal advice. This information is not intended to create, and receipt of it does not constitute, an attorney-client relationship. Internet subscribers and online readers should not act upon this information without seeking professional counsel.