U.S. Commerce Department Proposes New Reporting Rules for Advanced AI Models and Computing Clusters

September 11, 2024

On September 11, 2024, the U.S. Department of Commerce’s Bureau of Industry and Security (BIS) proposed a new rule[1], in accordance with directives from the 2023 AI Executive Order[2]. The proposed rule aims to enhance the U.S. government’s insight into the capabilities and security measures surrounding dual-use foundation AI models[3], ensuring that these technologies remain secure and available to support the U.S. defense industrial base.

 

According to the proposed rule, a covered U.S. person[4] will be required to submit a notification to BIS via email on a quarterly basis if the covered U.S. person “engages in, or plans, within six months, to engage in ‘applicable activities.’” “Applicable activities” include:

 

(i) “Conducting any AI model training run using more than 10^26 computational operations (e.g., integer or floating-point operations);” or

 

(ii) “Acquiring, developing, or coming into possession of a computing cluster that has a set of machines transitively connected by data center networking of greater than 300 Gbit/s and having a theoretical maximum greater than 10^20 computational operations (e.g., integer or floating-point operations) per second (OP/s) for AI training, without sparsity.”
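To illustrate roughly how these two thresholds operate, the sketch below checks hypothetical figures against them. The threshold values come from the proposed rule quoted above; the 6 × parameters × tokens estimate of training compute is a common industry rule of thumb for dense transformer models, not part of the rule, and all example figures are hypothetical.

```python
# Thresholds quoted from the proposed BIS rule.
TRAINING_OPS_THRESHOLD = 1e26     # computational operations per training run
CLUSTER_OPS_THRESHOLD = 1e20      # theoretical maximum OP/s, without sparsity
NETWORK_THRESHOLD_GBITS = 300     # data center networking, Gbit/s

def estimated_training_ops(params: float, tokens: float) -> float:
    """Rough compute estimate for a dense transformer training run.

    Uses the common ~6 * N * D rule of thumb (N parameters, D training
    tokens); this approximation is illustrative, not from the rule.
    """
    return 6 * params * tokens

def training_run_reportable(params: float, tokens: float) -> bool:
    """Would this (hypothetical) training run exceed the 10^26-operation threshold?"""
    return estimated_training_ops(params, tokens) > TRAINING_OPS_THRESHOLD

def cluster_reportable(peak_ops_per_sec: float, network_gbits: float) -> bool:
    """Would this (hypothetical) cluster meet both prongs of the cluster threshold?"""
    return (peak_ops_per_sec > CLUSTER_OPS_THRESHOLD
            and network_gbits > NETWORK_THRESHOLD_GBITS)

# Hypothetical example: a 1-trillion-parameter model trained on 20 trillion
# tokens yields roughly 6 * 1e12 * 2e13 = 1.2e26 operations, over the threshold.
print(training_run_reportable(1e12, 20e12))   # True
```

Note that the cluster prong requires both conditions together: a cluster below 300 Gbit/s networking would not trigger reporting under this criterion even at very high peak throughput, on this reading of the rule text.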

 

Once notified, the BIS will follow up with additional questions, which must be answered within 30 days, with any further BIS inquiries to be addressed within 7 days. These questions will focus on key areas outlined in the AI Executive Order, such as dual-use applications, cybersecurity measures for model training, and red-team testing procedures for robustness.

 

The BIS is accepting public comments on the proposed rule until October 11, 2024. Notably, on September 6, 2024, the BIS announced the introduction of new export controls, in collaboration with international partners, on quantum computing technologies, advanced semiconductor manufacturing equipment, and software critical for dual-use AI applications. These controls build on a series of stringent restrictions introduced over the past two years targeting advanced computing, supercomputing equipment, and related components.

 

We will continue to monitor the regulatory landscape as the BIS finalizes its rule. Please feel free to reach out for more personalized advice on how these proposed rules may impact your specific operations.

_____________________________

[1] See https://www.federalregister.gov/documents/2024/09/11/2024-20529/establishment-of-reporting-requirements-for-the-development-of-advanced-artificial-intelligence

[2] See https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence
[3] As defined therein, a “dual-use foundation model” is “trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.”
[4] The proposed rule defines “Covered U.S. person” as “any individual U.S. citizen, lawful permanent resident of the United States as defined by the Immigration and Nationality Act, entity—including organizations, companies, and corporations—organized under the laws of the United States or any jurisdiction within the United States (including foreign branches), or any person (individual) located in the United States.”
