On January 7, 2020, the federal Office of Management and Budget (OMB) released a draft memorandum setting forth guidance to assist federal agencies in developing regulatory and non-regulatory approaches to artificial intelligence (AI). The draft guidance will be open for public comment for sixty days, after which it will be finalized and issued to federal agencies.
According to the draft, the guidance is intended to reduce barriers to innovation while balancing privacy and security concerns and respect for intellectual property. The proposed guidance features ten principles to guide regulatory approaches to AI applications. In addition, in what may be a boon for private-sector developers of AI infrastructure, OMB reinforces the objective of making federal data and models generally available to the private sector for non-federal use in developing AI systems.
Initial responses to the proposed guidance have been mixed, and it remains to be seen how the principles in the guidance (once finalized) will be put into practice. Notably, however, those who intend to invest significant resources in AI-based infrastructure should be aware of what may prove to be the emerging blueprint for AI regulation in the near future.
The draft builds on several prior federal actions:
- the prior administration’s 2016 AI report and related public workshops on the subject;
- the President’s February 2019 Executive Order 13859, which prioritizes AI and encourages increased investment and public-private collaboration in AI;
- the launch of the American AI Initiative, which seeks to sustain America’s position as the global economic and technological leader in AI;
- the U.S.’s decision to join more than 40 countries in adopting the Organization for Economic Cooperation and Development (OECD)’s global AI principles; and
- the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) draft plan for federal government engagement in advancing AI standards, released in August 2019.
The proposed guidance expressly focuses on “weak” AI (i.e., technology that goes beyond conventional computing to learn and perform specialized tasks by extracting information from data sets), and states that “strong” AI, which involves technologies that exhibit awareness and the ability to improve their own cognitive abilities, is beyond its scope. It is also important to note that the draft guidance would not apply to the government’s own use of AI, such as for security and law enforcement.
The draft stresses that federal agencies must avoid regulatory or non-regulatory actions that “needlessly hamper AI innovation and growth.” Moreover, the draft states that, in some cases, agencies must “address” (presumably, by preempting) state laws that are inconsistent with national policy. The guidance also calls for forgoing new regulations where existing regulations are sufficient or where a national standard for a specific aspect of AI is “not essential.”
The proposed guidance offers ten principles for agencies to consider when weighing federal regulation in this area:
- Public trust in AI: Given that AI technology could pose risks to privacy and individual rights, the government should promote “reliable, robust, and trustworthy AI applications,” and respond appropriately to privacy issues based on the “nature of the risk presented and the appropriate mitigations.”
- Public participation: Agencies should promote awareness of standards, inform the public about the technology, and encourage public participation in the rulemaking process.
- Scientific integrity: Technological research and data should inform policy decisions in the area. Also, data used to train an AI system “must be of sufficient quality for the intended use.”
- Risk management: Agencies should employ a risk-based approach that weighs the nature of the consequences should an AI application fail (or succeed).
- Costs/benefits: When filling gaps in existing law on responsibility and liability for decisions made by AI, agencies should evaluate the benefits, costs, and effects associated with any method of accountability.
- Flexibility: Regulatory approaches should pursue “performance-based and flexible approaches that can adapt to rapid changes and updates to AI applications,” given the need for the law to evolve with the technology.
- Fairness and non-discrimination: Agencies should consider how AI applications may produce discriminatory outcomes.
- Transparency: Agencies should consider the sufficiency of existing law, policy, and regulation before adopting additional measures for disclosure and transparency.
- Safety: To promote AI systems that are safe and secure, agencies should consider whether there are “controls in place to ensure the confidentiality, integrity, and availability of the information processed, stored, and transmitted by AI systems.”
- Coordination: Agencies should coordinate with one another to advance innovation and appropriate protections, while allowing for sector-specific approaches where warranted.
Moreover, the proposed guidance advocates non-regulatory approaches to AI, including industry collaboration to develop non-regulatory policies, participation in pilot programs and experiments, and support for voluntary consensus standards.
As AI continues to influence real-world decision-making in fields such as employment, finance, e-commerce, advertising, and social media, it’s likely that some existing laws will have to be adapted or reinterpreted and that new regulations will have to be drafted (see, e.g., California’s chatbot law). Thus, advances in AI promoted by public investment and private-sector research will continue to challenge the law and lawyers going forward. In any case, the final version of this guidance will likely provide the blueprint for future AI regulation, at least under the current administration. It promises to be an interesting journey.