Exploring US AI Regulatory Headlines from 2023
- Cheryl Johnson
- Dec 28, 2023
- 8 min read

In 2023, the landscape of Artificial Intelligence (AI) applications witnessed an unprecedented expansion globally, marked by a significant increase in the deployment and utilization of AI technologies within American businesses. This rapid integration of new AI software applications has introduced substantial organizational risk. While the United States ("US") has not developed a single comprehensive US AI regulation, numerous frameworks and guidelines exist, many of which were introduced or expanded in 2023. In fact, throughout the year, the AI regulatory scene was abuzz with critical developments, each headline underscoring the urgency for privacy officers, data officers, and executives across the United States to take informed actions. In this article, we delve into the pivotal moments of 2023 AI regulatory developments, offering insights and guidance to navigate the evolving AI legislative environment as we advance into 2024.
While the US does not have a single federal privacy law or AI governance law, many federal and state laws address privacy, and some of those address AI privacy. Privacy professionals are familiar with these provisions included in the FTC Act, Fair Credit Reporting Act, Title VII of the Civil Rights Act, Americans with Disabilities Act, Age Discrimination in Employment Act, Fair Housing Act, among others.
While comprehensive AI laws have yet to be passed in the US, many have been introduced. In the meantime, to address Artificial Intelligence specifically, the Biden administration released the Blueprint for an AI Bill of Rights in October 2022; and updated the National AI Research and Development Strategic Plan in May 2023. In February 2019, the Executive Branch issued Executive Order 13859, Maintaining American Leadership in Artificial Intelligence; and in October 2023, it issued Executive Order 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
In this post, we will discuss the 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. In addition, we will cover the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF), released in January 2023, which represents a significant development for managing organizational risk of AI applications.
Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI)
Executive Order 14110, issued on October 30, 2023, focuses on the safe, secure, and trustworthy development and use of Artificial Intelligence (AI). This comprehensive directive outlines a multifaceted approach involving various government agencies and sectors. Here's a summarized overview:
Purpose and Policy (Sections 1 and 2): The order acknowledges the significant potential and risks of AI. It emphasizes responsible AI usage to address challenges and improve prosperity, productivity, innovation, and security. The policy includes a commitment to govern AI development and use based on eight guiding principles, involving government, private sector, academia, and civil society.
Definitions (Section 3): This section provides key definitions, including what constitutes an 'AI system', 'AI model', 'AI red-teaming', and definitions for other relevant terms like 'commercially available information' and 'critical and emerging technologies'.
Ensuring AI Safety and Security (Section 4): The Secretary of Commerce, in coordination with other relevant agencies, is tasked with developing guidelines, standards, and best practices for AI safety and security. This includes establishing consensus industry standards for safe AI systems, developing resources for managing AI risks, and creating evaluation and auditing benchmarks. The Secretary of Energy is directed to develop AI model evaluation tools and testbeds to assess AI capabilities for various security threats. This work involves consultation with private AI labs, academia, civil society, and third-party evaluators.
Promoting Innovation and Competition (Section 5): The order details measures to attract AI talent to the U.S., including streamlining visa processes and developing programs to identify and attract top international AI talent. It also outlines initiatives to clarify and modernize immigration pathways for AI experts and other professionals in critical and emerging technologies.
The Executive Order primarily addresses U.S. federal agencies but indirectly influences private businesses through its policy and principles, safety and security guidelines, and efforts to attract global AI talent. While the direct mandates apply to federal agencies, the broader implications of these policies, such as standards for AI safety and innovation initiatives, will likely impact private businesses in the U.S., especially those involved in AI development and use. The collaborative nature of the order, involving various stakeholders including industry members, suggests an inclusive approach affecting both public and private sectors in the realm of AI regulation and development.
NIST AI Risk Management Framework
The NIST AI Risk Management Framework (RMF), released by NIST in January 2023, provides guidance on managing the risks associated with designing, developing, and deploying AI systems. The RMF is structured into two main parts:
Foundational Information: This section outlines the framing of risks, the audience for the framework, and a detailed discussion on AI risks and trustworthiness, covering aspects like safety, security, accountability, transparency, privacy, and fairness.
Core and Profiles: This part details the core functions of the framework—Govern, Map, Measure, and Manage—and elaborates on their specific categories and subcategories. It also discusses how these functions can be adapted to suit different organizational needs and AI system specifics.
The purpose of the AI Risk Management Framework (AI RMF) is to serve as a resource for organizations engaged in the design, development, deployment, or utilization of AI systems. As AI becomes increasingly prevalent across various industries, this framework holds significant relevance for a wide range of organizations. It aims to provide methodologies that enhance the trustworthiness of AI systems and support the responsible development and use of these technologies. NIST has crafted the AI RMF to be both pragmatic and flexible, enabling it to stay aligned with the continuously advancing field of AI. A key emphasis of the framework is on harnessing the benefits of AI for society while safeguarding against its potential risks.
The "Core" of the AI RMF describes four pivotal elements, each addressing a specific aspect of AI risk management:
Govern: The Govern function emphasizes the establishment of robust governance practices for AI. It involves creating organizational policies and guidelines that govern the ethical, responsible, and compliant use of AI technologies. This step is crucial for ensuring that AI initiatives align with an organization's core values and legal obligations.
Map: The Map function is centered on systematically identifying and documenting the potential risks associated with the use of AI. This involves an in-depth analysis of how AI systems are integrated within the organization, the nature of the data they process, and the potential impact of their decisions. Mapping these elements helps in understanding the broader implications of AI deployment.
Measure: The Measure function focuses on evaluating the identified AI risks in terms of their likelihood and potential impact. This assessment is key to quantifying the risks and aids in prioritizing them based on their severity. It serves as a critical step in developing an informed risk management strategy.
Manage: The Manage function involves developing and implementing strategies to manage the prioritized risks. This may include risk mitigation techniques, contingency planning, and establishing protocols for risk monitoring and review. The aim is to ensure that risks are managed in an effective and sustainable manner, aligning with the organization's risk tolerance and strategic goals.
These four functions work in concert to provide a comprehensive approach to AI risk management, guiding organizations through identifying, assessing, and responding to the unique risks posed by AI systems while still leveraging the benefits of the technology. The AI RMF is designed to be dynamic and adaptable, allowing organizations to tailor its implementation to their specific needs and to the evolving landscape of AI.
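As a concrete illustration of how the Map, Measure, and Manage functions fit together, the sketch below models a hypothetical risk register in Python. The risk names, the 1-5 scoring scale, and the tolerance threshold are invented for illustration; the RMF itself does not prescribe any particular scoring formula.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """A single risk identified during the Map function."""
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # Measure: quantify each risk as likelihood x impact.
        return self.likelihood * self.impact

def prioritize(risks: list[AIRisk], tolerance: int) -> list[AIRisk]:
    """Manage: keep only risks above the organization's tolerance,
    ordered from most to least severe."""
    return sorted(
        (r for r in risks if r.score > tolerance),
        key=lambda r: r.score,
        reverse=True,
    )

# Map: a hypothetical register of identified risks.
register = [
    AIRisk("training-data privacy leak", likelihood=3, impact=5),
    AIRisk("biased hiring recommendations", likelihood=4, impact=4),
    AIRisk("chatbot gives outdated policy info", likelihood=4, impact=2),
]

for risk in prioritize(register, tolerance=10):
    print(f"{risk.name}: {risk.score}")
```

The Govern function sits above a sketch like this one: organizational policy would decide what counts as "impact," who sets the tolerance, and how often the register is reviewed.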
How the NIST AI Risk Management Framework and the Executive Order on Safe, Secure, and Trustworthy AI Work Together
The President's Executive Order charged several agencies, including the National Institute of Standards and Technology (NIST), with creating guidelines and best practices to promote the safe, secure, and trustworthy development and use of AI. NIST's tasks include, among others:
- Developing a companion resource to the NIST AI Risk Management Framework focused on generative AI;
- Developing a companion resource to the NIST Secure Software Development Framework for development practices for generative AI;
- Launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities;
- Establishing guidelines and processes to enable developers of generative AI to deploy safe, secure, and trustworthy systems.
Indeed, NIST's work is well underway in these and other areas. In June 2023, NIST created the Generative AI Public Working Group, an example of its engagement with the private sector, academia, and civil society as it produces the guidance and processes required by the Executive Order.
Furthermore, in mid-December 2023, NIST released draft guidance on evaluating the privacy protection technique of "differential privacy." In its simplest terms, differential privacy is a mathematical approach to privacy protection: it allows data from a database to be publicly released while providing strong, quantifiable privacy protection for individuals whose information is included in that database.
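To make the idea concrete, the sketch below applies the Laplace mechanism, one classic way to achieve differential privacy, to a simple counting query. The survey data and the epsilon value are hypothetical, chosen only to show the mechanics: a count changes by at most 1 when one person is added or removed, so Laplace noise with scale 1/epsilon yields an epsilon-differential-privacy guarantee for that query.

```python
import random

def dp_count(values: list[bool], epsilon: float) -> float:
    """Release a count of True responses with epsilon-differential privacy.

    A counting query has sensitivity 1 (one person changes the count
    by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(values)
    scale = 1.0 / epsilon
    # Laplace(0, b) sampled as the difference of two Exponential(mean=b) draws
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

# Hypothetical survey: did each respondent opt in?
responses = [True, False, True, True, False, True, True, False]
noisy = dp_count(responses, epsilon=0.5)
print(f"true count = {sum(responses)}, released = {noisy:.2f}")
```

A smaller epsilon means more noise and stronger privacy; a larger epsilon means a more accurate release but a weaker guarantee. Calibrating that trade-off is exactly the kind of question the NIST draft guidance discussed below is meant to help with.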
Differential privacy is a privacy-enhancing technology (PET) used in data analytics. A lack of standards, however, makes it difficult to employ effectively. The Executive Order on AI mandated the creation of guidelines to evaluate the efficacy of differential privacy guarantee protections for AI. In response, in mid-December 2023, NIST introduced Draft NIST Special Publication (SP) 800-226, titled "Guidelines for Evaluating Differential Privacy Guarantees." While the guidance is designed primarily for federal agencies, its utility extends well beyond them. By providing a standardized framework, this comprehensive document empowers software developers, business leaders, and policymakers across various sectors to implement differential privacy with greater confidence and consistency.
Predictions for 2024
As we stand at the threshold of 2024, the trajectory of AI regulatory developments in the United States, as witnessed in 2023, sets a dynamic and promising path forward. The groundbreaking efforts by NIST and the federal government, particularly through Executive Order 14110 and the AI Risk Management Framework, have laid a solid foundation for the safe, secure, and trustworthy evolution of AI technologies.
These initiatives are not just regulatory mandates; they symbolize a commitment to fostering a balanced ecosystem where innovation thrives alongside robust privacy and security standards. As such, we may anticipate a year of continued progress, where the principles set forth in these documents begin to manifest more concretely in both policy and practice.
Expect to hear more about the technology behind AI as discussions expand around transparency and other Responsible AI concepts. Accordingly, phrases like privacy-enhancing technology (PET), discussed above, may become more mainstream. The dialogue will extend beyond privacy and security, however, encompassing issues like AI bias, transparency, and accountability. This broader regulatory scope will be vital in building public trust in AI systems and ensuring they are used in a manner that benefits society as a whole.
The conversation around AI Governance is likely to intensify in 2024. As AI technologies become more deeply integrated into the fabric of everyday life, the need for clear, enforceable regulations will become increasingly apparent. This could potentially lead to the introduction of more comprehensive AI laws, both at the federal and state levels.
Finally, as we progress into 2024, we must acknowledge the unpredictable nature of technology and regulatory landscapes. While we can make educated predictions based on current trends, the field of AI is evolving at a rapid pace, often in unexpected directions.
This unpredictability necessitates a flexible and adaptive approach to AI governance; one that can swiftly respond to new challenges and opportunities as they arise. Many headlines will likely announce valuable initiatives by agencies such as NIST that provide further guidance and frameworks for implementing safe and trustworthy AI systems. The ongoing dialogue between different stakeholders, continuous monitoring of AI advancements, and a willingness to evolve and update regulatory frameworks will be key to navigating the unpredictable yet exciting journey of AI development in 2024 and beyond.