Navigating a Complex, Rapidly Evolving Landscape
Rapid advances in AI, from facial recognition to synthetically generated images, continue to accelerate while the global race to harness and regulate the technology struggles to keep up. Numerous entities are vying to establish national, international, and industry-wide AI regulations that balance innovation with ethical considerations and social impact.
For organizations with a public policy focus, AI regulation carries potential impacts on operations, innovation, and ethical considerations. Getting that balance right will take time, and agile processes that can adapt as quickly as the technology evolves will be pivotal.
“AI regulation faces the same challenges as internet regulation when it comes to safety, security, and trust. We’re always going to be playing catch-up,” says Ryan Raybould, NJI Vice President of Interactive Media. “However, when paired with big data and ownership of that data, AI and machine learning pose an even greater threat that is going to be even more difficult to decipher, and with a need to do it in real time.”
Designed to prioritize fair, inclusive systems, AI regulations can mitigate the risks of bias, privacy violations, security vulnerabilities, copyright infringement, job displacement, and other concerns. As these guardrails are developed, staying current will be essential both to ensure compliance and to maintain a competitive advantage.
AI regulations are the compass for navigating a world driven by artificial intelligence. With over 50% of companies leveraging AI capabilities to drive data-driven insights and increase efficiency and productivity, global leaders are racing to craft national, international, and industry-wide rules.
The U.S. has taken a more hands-off approach to regulation than its international counterparts. While the European Union’s pioneering AI Act offers a comprehensive classification of risks and legal requirements, the U.S. is lagging behind with a year-old AI Blueprint, industry-led voluntary commitments, and multiple initiatives still in development. Meanwhile, China has released interim licensing and security measures for generative AI, emphasizing economic dynamism and adherence to socialist values.
These divergent models reflect the complex global landscape of AI regulation and its implications for individuals, companies, and governments. Adding to the challenge is the complex nature of machine-learning models, which makes it difficult to enforce regulations, especially in the areas of explainability and transparency.
NJI actively advises clients on AI use, both to increase their understanding of it and to develop or refine their organization’s position on using AI. To dig a little deeper, read on for a primer on AI regulation efforts as they currently stand, and feel free to reach out.
A Comparative Lens on AI Regulation
As of August 18, 2023
Industry-Led
Voluntary Commitment to Safety, Security, and Trust
On July 21, leading AI companies including Amazon, Google, Microsoft, and OpenAI signed a White House pledge to promote responsible AI development. Their commitments encompass rigorous safety testing, cybersecurity measures, transparency about AI capabilities and limitations, and the use of AI to address societal challenges.
Frontier Model Forum
On July 26, Google, Microsoft, Anthropic, and OpenAI announced an industry-led body to advance AI safety research, define best practices, facilitate information exchange between policymakers and the industry, and address societal challenges. The advisory board will oversee strategy, collaboration, and safe development of large-scale AI models, aligning with global AI governance and risk mitigation.
OECD Principles
In 2019, an Organisation for Economic Co-operation and Development (OECD) recommendation set the first intergovernmental AI standard, emphasizing human-centric values, fairness, transparency, security, and accountability. In February 2022, the OECD released an AI classification framework to help policymakers, regulators, and legislators evaluate the benefits and drawbacks of AI systems. The OECD also maintains an AI Policy Observatory that provides real-time information on AI policies across the globe, promotes evidence-based policy analysis, and fosters international dialogue on AI’s areas of impact.
United States
Federal Blueprint for an AI Bill of Rights
In October 2022, the White House Office of Science and Technology Policy introduced the Blueprint for an AI Bill of Rights, emphasizing proactivity, stakeholder consultation, and public reporting to ensure accountability, equality, and civil rights protection. The document outlines five core principles for AI systems that could potentially affect Americans’ “rights, opportunities, or access to critical resources or services,” including prevention of algorithmic discrimination, data privacy, transparency, and human alternatives for resolving issues. However, concerns remain about its enforceability and application.
NIST Risk Management Framework
In January 2023, the National Institute of Standards and Technology (NIST), part of the U.S. Department of Commerce, released a voluntary AI Risk Management Framework for the development of AI products and services.
Other Initiatives In Development
On July 21, the White House announced that the Biden-Harris Administration is developing an Executive Order and pursuing bipartisan legislation to regulate and manage risks posed by AI. In Congress, several legislators have introduced or plan to introduce bills, including Senator John Thune’s Artificial Intelligence Innovation and Accountability Act, Representative Josh Harder’s AI Accountability Act, and Representative Yvette Clarke’s Political Ads Act, among others.
Meanwhile, a wide array of other ideas and initiatives is in the works. The persistent gap between principles and substantive, enforceable legislation reflects the significant challenge of AI regulation.
European Union
The AI Act
Adopted by the European Parliament on June 14, the EU’s bill provides a risk-based approach to regulating AI. Applications with unacceptable risks, such as government-run social scoring, will be prohibited, while high-risk applications will have to meet legal requirements. The Act also requires companies to demonstrate that their AI systems are safe, effective, privacy-compliant, transparent, explainable, and non-discriminatory. For example, it limits facial recognition software and requires greater data disclosure. Set to become the world’s first comprehensive legal framework for AI, the Act could establish a global standard for responsible AI use, much as the GDPR did for data privacy.
Council of Europe’s AI Convention
This pioneering Convention, still in development by the Council of Europe, aims to provide a comprehensive legal framework for AI within the context of human rights and democracy. It is set to introduce legally binding protections for individual human rights, extending its scope beyond the EU and potentially becoming a global benchmark. By emphasizing non-discrimination, privacy, transparency, and the right to redress, the AI Convention could have a profound impact on healthcare and other sectors.
China
Interim Measures for the Management of Generative AI Services
The Cyberspace Administration of China has released interim measures that require AI providers to obtain licenses, undergo security assessments, and uphold socialist values, including refraining from inciting actions against the state. China heavily invests in AI across sectors, attracting significant funding and producing a substantial share of research.
Effective August 15, these rules address intellectual property protections, transparency, and discrimination prevention, while also emphasizing China’s strategic use of AI to advance its political and economic goals. Predictably, every country is developing AI in line with its own social values and political agenda, which could create a complex and inconsistent environment for global companies deploying AI.