Global AI regulation taking diverse approaches, from rules-based to principles-based
Artificial intelligence's quantum leap in the past year with the advent of generative AI systems like ChatGPT has spurred the world's regulators to move with unprecedented speed, although with significant differences in approach. Here is our review of how Europe, Japan, South Korea, the UK, the US, India and Brazil are confronting a technology that has stoked fears but promises to transform societies and the economy.
2 January 2024
By Matthew Newman and Mike Swift
Artificial intelligence has been catapulted in little over a year from the realm of The Terminator and its fictional killing machines to kitchen-table concerns about mass unemployment, job discrimination and credit scores.
In development for decades, the technology made a quantum leap recently on the back of collective advances in cloud computing, chips and smartphones that generate ever more zettabytes of data each year.
That leap, made by generative AI systems such as ChatGPT, in turn spurred the world’s regulators in 2023 to move with unprecedented speed to confront it — in Europe, but also in California, which could enact extensive rules in 2024 for commercial AI.
The European Union has been at the forefront of regulating AI. The 27-nation bloc lived up to its reputation as a global regulatory leader when EU lawmakers reached a political agreement on the AI Act in December. The deal capped four years of talks on how to rein in the AI systems that pose the biggest risks and ban some uses — including social scoring and predictive policing — that are the most dangerous to society and democracy.
The EU's risk-based approach centers on obligations that are lighter for less-dangerous practices and heavier as the risk increases. The approach is also the centerpiece of the voluntary code of conduct agreed by the Group of Seven nations in November.
While the EU's AI Act has inspired countries such as Japan and Brazil, others — such as the US federal government, the UK, Australia and India — are wary of imposing strict rules on a mushrooming industry that's already generating billions of dollars in revenue. They want to attract investment and become hubs for AI research and startups, and they are still dipping their toes in the water before leaping in to regulate.
At the same time, they also want to reassure the public that there are guardrails on the most dangerous risks — from chemical, biological and nuclear catastrophes to cyberattacks, self-replication, and threats to human rights and democratic values.
The instant popularity of consumer-friendly AI chatbots, demonstrated by the release of OpenAI's ChatGPT late in 2022 and Snap's My AI in 2023, has spurred legislators around the world to take swift action. ChatGPT, which became the fastest-growing consumer application in history within two months of its launch, has sparked excitement and fear.
Chatbots are capable of lies, fabrications and "hallucinations," such as when they insert fake footnotes into plausible-sounding reports. Generative AI can make deepfake images and videos of politicians and movie stars. An AI-driven image of the pope in a puffer coat prompted smiles around the world.
The EU's drive to wrap up its AI Act in 2023 contrasts with the caution of lawmakers in the US, UK and South Korea. Politicians in the US and the UK have expressed concern that over-regulation will drive away investment and innovation, and many countries will opt for a lighter-touch, principles-based approach.
In short, it’s too soon to know whether the European legislation will become the same global template for AI regulation that the General Data Protection Regulation has been for privacy laws around the planet.
South Korea, for example, is shifting gears from a rigid, rule-based framework to a more flexible, principles-based system; in the US, a presidential executive order has set up new standards for AI safety and security, and the UK is sticking to a hands-off policy — all suggesting that a universal buy-in to the EU’s approach looks less likely with AI than with data protection.
Europe
The European Commission, the EU's executive arm, set out a proportionate risk-based approach in 2021 that imposes regulatory burdens only when an AI system is likely to pose high risks to fundamental rights and safety.
The proposal didn't include high-performance foundation models and general-purpose AI — such as GPT-3 and GPT-4 from OpenAI, on which ChatGPT is based. During several months of negotiations, regulation of foundation models proved to be a major sticking point before a deal could eventually be reached on Dec. 8.
The European Parliament initially proposed regulating foundation models, which are the furthest upstream in the AI supply chain. EU governments wanted to start one step downstream, regulating general-purpose AI systems.
France, Germany and Italy insisted that they didn't want any regulation of foundation models, concerned that over-regulation could scare away investment in AI startups such as France's Mistral and Germany's Aleph Alpha.
After three days of talks, legislators agreed on a two-tier approach that differentiates between "foundation models" — technology such as OpenAI's GPT-4 — and "general-purpose" systems built on top of those models, such as ChatGPT, which can create new content including audio, code, images, text and video.
The draft accord will now be finalized into legal text during several rounds of technical meetings. A final text needs to be wrapped up by Feb. 9, in time for a plenary vote in the parliament in March or April, ahead of European elections in June.
United States
With the US home to many companies with an early lead in AI — emerging or existing tech giants such as OpenAI, Microsoft, Google and Meta Platforms — the regulatory debate there could have international significance.
The US could present a tale of two coasts. Leaders in Congress said in 2023 that they wanted to make AI a priority, summoning the likes of Elon Musk, Mark Zuckerberg and Sam Altman to an "AI forum" in September. While everyone at the event agreed that the federal government must have a regulatory role, there was no clarity as the year closed about what that role would be.
In late October, President Joe Biden released an executive order on AI that directs federal agencies to regulate their own use of AI, in turn creating a long-term market influence that prioritizes privacy and mitigating bias. But it remains unclear whether — or when — Congress might follow the lead of the EU to pass legislation.
As federal lawmakers struggle to find their footing on AI, lawmakers at the state level — particularly in California — will be the primary legislators writing AI-specific laws in 2024, a pivotal election year in the US. As the most populous state, California often sets regulatory policies that become models for the rest of the country. So it could be with AI.
In the absence of a federal AI law, California is looking to lead the US with requirements for automated decision-making technology, or ADMT, and risk assessments. The California Privacy Protection Agency, or CPPA, released its first draft of proposed ADMT regulations in December, detailing opt-out, notice and access rights for consumers, including workers and children.
“While still only a proposed draft, in our view, the agency’s draft ADMT regulations are by far the most comprehensive framework in the commercial AI space,” said Maureen Mahoney, the CPPA's director of public policy.
Other states, such as Colorado, have baked ADMT rights into their consumer privacy laws, applying when the technology is used in "solely automated" or "human reviewed" systems that make consequential decisions, such as access to housing or a loan. But California's definition of ADMT would cover computation "as whole or part of a system," either to make a decision or to "facilitate human decision-making" when it has "legal or similarly significant effects."
California’s draft rules also suggest extending the ADMT rights and risk-assessment requirements to profiling in public places, monitoring of employees and students, and when a business uses personal data to train an AI system. Some CPPA board members and business groups fear the proposal is too broad and burdensome. The agency has agreed to revisit the proposal in 2024, after gathering additional feedback from economists and individual board members.
Japan
AI governance in Japan is being designed as a risk-based approach, borrowing heavily from the G7's generative AI guiding principles and code of conduct, of which Japan was the chief architect as host of the group's meetings in 2023.
There are Japan-specific issues, however, such as how its machine-learning-friendly copyright law should address potential harms to rights-holders from generative AI. The country is struggling to strike a balance between catching up on innovation — advancing its domestic Japanese large-language models — and updating its laws and rules to protect existing industries, in particular content creators, from IP infringement.
On Dec. 21, the Japanese government’s AI strategy group — comprising industry, tech and legal experts and cabinet ministers — revealed draft AI guidelines that are set to be finalized in March.
While not legally binding, the guidelines aim to consolidate various agency-specific guides to provide a baseline for AI development, provision and industrial use. The principal objectives include transparency and fair competition, record-keeping during development, and disclosure of training methods.
While the government's discussion of the AI guidelines was framed in accordance with G7 principles, concerns were raised about how to incentivize innovation, from semiconductor access to developing the next generation of AI developers. Japan is looking to AI technology to boost the productivity of its shrinking population and aging workforce.
Japan's industry-leaning approach to AI regulation is most apparent in its copyright law, a 2018 revision of which allowed nearly unchecked use of copyrighted materials for machine learning. The challenging issue of how to prevent IP abuse without limiting machine-learning access to large data sets is currently under discussion at the Cultural Agency, which oversees the country's copyright law.
India
India's stance on regulating AI has changed. Its latest position is that it should not repeat the mistake of an inadequate regulatory framework for Internet governance.
In April 2023, the government spoke of establishing principles that would act as "guardrails" for generative AI. With India poised to become a global player in information technology and AI, the government feared that regulation could impede innovation, given its ambition to be a "global AI hub."
India has since transitioned from the abstract concept of self-regulation and diffuse responsibility for safety and trust to holding platforms legally accountable.
India has announced its intention to regulate AI through the lens of a "risk-based framework" while promoting innovation.
In December, Rajeev Chandrasekhar, the Minister of State for Electronics and Information Technology, said India's broad approach to AI — which today is "through the prism of casting legal accountability on platforms for the safety and trust of that platform" — would evolve.
The government intends to adopt a hybrid model, rather than adopting the European or American approach to regulating AI. The idea is to allow the industry to regulate itself while prioritizing citizens’ rights.
Unlike some developed jurisdictions, such as the EU and the US, India will not have separate legislation but will regulate AI under the proposed Digital India Act, which will replace the Information Technology Act 2000.
The onus to prevent misinformation or exploitative content causing user harm will be on social-media platforms, which could lose the "safe-harbor" immunity they currently enjoy under the IT Act — a provision that shields platforms from responsibility and liability for what third parties post on their websites.
Indian Prime Minister Narendra Modi has also called for a global framework for the ethical use of AI, flagging the potential threat to global security if AI-powered weapons were to reach terrorist organizations.
Brazil
Discussions on AI regulation in the Brazilian Congress made significant progress in 2023. Inspired by European regulation, national authorities have been discussing rules for AI since 2020.
The proposed risk-based regulation establishes general standards for developing, implementing and using AI in Brazil. The legislation doesn't establish a body to enforce the future law, leaving that decision to Congress and the executive.
AI systems must be supervised by humans and respect certain principles, including non-discrimination and transparency. The proposal defines high-risk systems and the criteria to be used to update that list.
Amendments filed in December seek to limit potential updates to the list of systems considered high-risk, and to remove the assessment of people's debt capacity from the high-risk list. They also aim to establish that biometric identification systems managed solely by public authorities must be considered high-risk.
Brazil's National Data Protection Authority stepped into the debate in 2023 and advocated that it should become the central body to regulate AI as it involves a large amount of personal data. The regulator said future AI legislation would partially overlap with the General Law for Data Protection, or LGPD, which is closely based on the EU's GDPR.
The Senate expects to vote on the bill early this year.
United Kingdom
The UK is a main proponent of the principles-based approach. The government spent 2023 trying to make a virtue of its reluctance to regulate AI.
It is positioning itself as a global center of innovation for AI technologies, a business hub for tech companies and the birthplace of a historic global drive for AI safety via the Bletchley Declaration.
In March, the government published a light-touch policy paper on regulating AI. The intention is that existing regulators will oversee developments in AI that fall within their existing remits, allowing flexibility for innovation.
A forum coordinates the regulators spanning data protection, finance, online safety and competition. The principles-based approach has received praise at home and abroad, typically in contrast to the EU’s risk-based AI Act.
The UK's regulatory highlight for AI was hosting the first global AI Safety Summit in November at Bletchley Park, the home of the World War II codebreakers. Technically, the event's achievement wasn't regulation itself, but cajoling nations, including China, into signing the Bletchley Declaration.
Almost 30 countries plus the EU pledged to work together to identify "safety risks of shared concern" and to build "a shared scientific and evidence-based understanding of these risks." The declaration was effectively a building block for global regulation.
As the year closed, the mood on AI was turning in the UK, where the public no longer sees it as a clear positive for society. "[We need to] make sure that 2024 isn't the year that people lose trust in AI," John Edwards, the country's Information Commissioner and one of its AI regulators, said at a recent event.
South Korea
In a significant move to modernize its approach to AI regulation, South Korea is shifting gears from a rigid, rules-based framework to a more flexible, principles-based system. Spearheaded by the Personal Information Protection Commission, or PIPC, under the leadership of its chief, Ko Hak-soo, this new strategy is designed to better accommodate the rapid evolution and complex data requirements of AI technology.
The PIPC's comprehensive policy roadmap, introduced in August 2023, aims to balance the need for privacy protection with the secure use of data in AI. This includes the implementation of a principles-based regulatory system, aligning safeguards with technology risk levels, and fostering sector-specific guidelines through collaborations between the government and private sectors.
Notably, the roadmap introduces an advance-review system, which enables AI developers to have their data processing practices pre-emptively assessed for compliance with legal and safety standards. Moreover, this system offers potential exemptions from some administrative sanctions, fostering a more supportive environment for AI innovation.
According to the privacy regulator, these aspects of the policy will be further refined and elaborated with the help of the specialized AI Privacy Team, established in October. This involves active engagement in policy discussions with a diverse array of stakeholders, encompassing both domestic and international tech giants like Google, Meta Platforms, and Microsoft, as well as South Korean leaders such as Naver and Kakao, along with telecom firms and startups.
These wide-ranging consultations are intended to develop a pragmatic and effective regulatory framework that aligns with the evolving market conditions.
Meanwhile, several bills focused on AI are pending in South Korea's parliament. These legislative proposals aim to not only support the AI industry but also to ensure user protection, highlighting the importance of AI system trustworthiness. They propose more rigorous notification rules for high-risk AI services and the creation of certification processes to ensure their trustworthiness, as well as laying the groundwork for ethical AI guidelines.
Additional reporting by Freny Patel, Toko Sekiguchi, Jet Damazo-Santos, James Panichi, Saloni Sinha, Jenn Brice, Madeline Hughes, Frank Hersey, Ana Candil and Jenny Lee.