Critical insights from the IAPP Global Privacy Summit 2024
The International Association of Privacy Professionals' Global Privacy Summit returned to Washington, DC, last week, with regulators, companies and practitioners digging into a plethora of hot topics including AI governance, the EU's AI Act, privacy-enhancing technologies, US state privacy law, child privacy, healthcare privacy and more.
Editor's letter
8 April 2024
Mike Swift
Chief Global Digital Risk Correspondent
When the world’s privacy community assembled in Washington, DC, this time last year, generative artificial intelligence had just struck their world like one of those rogue big-ocean waves that rise out of nowhere and barrel through everything in their path.
As regulators, in-house lawyers, academics and technologists gathered for the 2024 edition of the International Association of Privacy Professionals’ Global Privacy Summit last week, there was a much greater sense of knowledge and control. For many attendees, the conference centered on training in how to oversee the ethical and lawful deployment of AI systems. On the last day, 200 people sat a three-hour exam on AI technology and law; those who pass will earn the “Artificial Intelligence Governance Professional,” or AIGP, certification from the IAPP. That designation signals that AI governance is increasingly seen as its own profession and expertise, one with significant overlap with data privacy, but distinct nonetheless.
In part because of the technological step change that AI represents, there was a strong civil rights component to this year’s GPS discourse, and that concern came through in many of the nearly 50 news stories filed from the conference by MLex. Australia’s newly minted Privacy Commissioner, Carly Kind, framed the equation simply in her general session keynote: Personal data in general, and AI in particular, are about power, she said, and the key “is about how we redistribute and rebalance power through the law.”
Other regulators, such as Rohit Chopra of the US Consumer Financial Protection Bureau, voiced similar concerns. Chopra said he fears that both the US and Europe are “lurching on this road to a more surveillance-oriented society. Many of those things we already observe in certain societies, particularly in China, and some of that is really out of step with, I think, how we see our free society and open market economy.”
One constant about the Global Privacy Summit is that it just keeps growing, and the IAPP’s AI governance leadership means that will likely continue. MLex sent its largest-ever team of specialist journalists to the 2024 edition, covering all of the key events and talking one-on-one with national enforcers from the EU, the US, the UK, Australia, South Korea, Singapore and beyond.
Global privacy summit wrestles with AI, data protection across digital 'empires'
5 April 2024
New regulators took the spotlight, while others seemed to vanish into the shadows. A new profession, an “artificial intelligence governance professional,” was born. Systems of digital regulation were broken into three “empires” — one from China, one from the EU and one from the US.
The International Association of Privacy Professionals' Global Privacy Summit 2024 attracted in-house data protection lawyers, regulators from every continent beyond Antarctica and representatives of the tech companies who hold all that personal data.
European regulators contributed to the growing sense that the EU's General Data Protection Regulation — the 2018 privacy rulebook that led the way around the world — is now fully embedded. Now, they want to get on with enforcing the GDPR. US state privacy enforcers from California and elsewhere made it clear they’re ready to start ramping up enforcement efforts too.
The US Federal Trade Commission, the lead US data privacy regulator, which has caught a great deal of heat for its recent antitrust enforcement actions, kept a decidedly low profile, however. Still, the FTC has spoken forcefully over the past year, through its surge of aggressive and often creative enforcement actions.
Across GPS24, there was a call for a human-centered view of data protection — that it’s not just about the mechanics of enforcing laws but viewing privacy protection as a bulwark against technocratic authoritarianism, whether by governments or by corporations.
One thing that hasn’t changed is the lack of a national comprehensive privacy law for the world’s largest economy, the US. The expanding patchwork of state privacy laws frustrated privacy professionals, and federal regulators reiterated their annual calls for Congress to pass a national law.
European view
For regulators from the EU, there was a sense that the GDPR has reached a level of maturity, and that the time to squabble over jurisdiction is past.
Speaking on panels and in interviews with MLex on the sidelines of the conference, European privacy regulators said they are busier than ever. John Edwards, the head of the UK data protection authority, said his office received 35,000 complaints last year and Ulrich Kelber of the German federal watchdog said his team took part in more than 1,000 meetings nationally and at the EU level.
But there was none of the uncertainty and infighting that characterized the first few years of GDPR enforcement. Nor are there big, geopolitical open questions like whether the Irish regulator is being soft on Big Tech, or what to do about data transfers.
Now it's a case of interpreting legal nuance, applying the law to new technologies like artificial intelligence, and understanding the steady flow of case law out of the EU Court of Justice.
As officials from around the world debate how to regulate AI, the question in Europe is more practical: which authority will enforce the AI Act, the world’s first comprehensive AI regulation.
Anu Talus, the chair of the European Data Protection Board, would like national data privacy watchdogs to take on that role. The decision is up to each EU government, and data protection authorities, including those in France and Italy, are lobbying hard for it (see here).
"What we underlined in our joint opinion with the European Data Protection Supervisor was that the enforcement should be centralized and it should be the same authority that is supervising the GDPR," Talus told MLex.
US states
Privacy enforcers from California and other states were the most visible US enforcers at GPS24. Rebecca Kelly Slaughter was the only FTC commissioner who spoke. She made an effort to make no news, and largely succeeded.
Slaughter did note that the FTC scored a key victory in its litigation against Kochava, an Idaho data broker that sells location data, when a federal judge blessed the FTC’s “theory that unfettered data collection of sensitive geolocation data” could be a violation of the “unfair” prong of Section 5 of the FTC Act (see here).
State regulators were everywhere. Ashkan Soltani, executive director of the California Privacy Protection Agency, said the “kid gloves are off,” saying companies have had six years to prepare for the start of enforcement this year from the first standalone data protection authority in the US (see here).
The CPPA is building up its investigative teams, which will have the power to issue subpoenas and compel testimony. The agency is also hiring an audit team, including a chief auditor, which will be going directly to companies so they can walk regulators through how their systems operate, he said.
The agency issued its first enforcement advisory this week reminding companies that they shouldn’t be collecting "excessive and unnecessary" personal information from consumers attempting to exercise their data rights under the law, and CPPA enforcement chief Michael Macko said more advisories are coming.
State attorneys general from Connecticut and Colorado, whose state privacy laws took effect last year, said they’re pleasantly surprised businesses have been willing to work closely with them.
“Companies responding have not only made fixes, they’ve gone broader,” Connecticut Deputy Associate Attorney General Michele Lucan said, “and that is something we love to see.”
While state legislatures have been busy passing privacy laws, the US Congress has not. There was very little indication this year about if, how or when a new version of the American Data Privacy and Protection Act — the national privacy bill that passed out of a House committee in 2022 — would be introduced. When Democrats held the gavel two years ago, they consulted privacy professionals about the proposed law. Now, with Republicans in charge, the process has been opaque.
AI profession
Last year’s GPS was about the shock and awe of generative AI, as the sudden rise of ChatGPT in the months before the conference felt like a wave hitting the privacy world. This year, practicalities, and the need to create a new AI governance profession, moved to the fore.
Privacy practitioners said AI’s close association with data protection has added a new dimension to their work. Privacy officers in both public and private institutions now see an increase in AI-related work, and they are expected to play a crucial role as the technology continues to evolve.
Privacy officers from Texas, Washington and other states now routinely examine the role of AI in their responsibilities. For example, they have incorporated AI in the personal information assessment process and examine contract terms related to the use of AI for state procurement. The goal is to make sure that there are proper guardrails in place while innovation is being enabled (see here).
Thanks to the common principles that data governance and AI governance share, privacy professionals’ experience and knowledge position them to contribute to the regulation of AI.
State government privacy officers are expected to become “chief architects of good decision governance” and help “create guardrails that enable learning [and] individual empowerment and are still socially anchored" (see here).
On the last day of the conference, a group of 200 people sat down to take a three-hour exam on artificial intelligence. When the results are known toward the end of April, those who pass will be the first to earn certification as an artificial intelligence governance professional (AIGP) from the IAPP.
The IAPP isn’t trying to define AI, or mandate how AI governance works, but to chart out how the AI governance profession organizes itself and what types of core competencies professionals need to ethically manage AI systems (see here).
The conference wasn’t an anti-technology event, with several regulators saying they expect to depend on privacy-enhancing technologies — PETs — to reduce the need to profile people to target ads or for other commercial purposes.
PETs will be expected to meet data protection principles in certain sectors, warned Stephen Almond of the UK’s Information Commissioner’s Office. Although PETs are currently considered “nice to have,” the UK data protection authority will begin to require them in sectors such as finance and healthcare, he said (see here).
Singapore is seeing success with its own sandbox initiative for PETs, said Denise Wong, deputy commissioner of Singapore’s Personal Data Protection Commission. But for PETs to take hold, organizations are seeking clarity on when they are required and viable. Until businesses are required to adopt PETs, many may view the switch as too costly, Wong has found (see here).
Human-centric privacy
In her keynote address at the start of the conference, Anu Bradford, a law professor at Columbia University in New York, described digital privacy regulation as an ideological struggle between three “Digital Empires” — the Chinese state-driven model; the American market-driven model; and the European “rights-driven” model, focused on protecting the fundamental rights of individuals and a fairer distribution of the gains from digital transformation.
“The US companies were set free to take over the world, and that’s exactly what they have done,” Bradford said.
Others echoed the theme that data protection isn’t just about protecting individuals’ privacy rights, but about protecting democracy and open markets.
Rohit Chopra, director of the US Consumer Financial Protection Bureau, painted a dystopian vision of pervasive surveillance through browsers, fitness trackers, cars and payment apps. Chopra warned of the threat to a free society from the “lurch” of Big Tech toward a melding with Big Banking, a combination that, without checks, could allow commercial surveillance to run amok (see here).
“It makes you wonder, when you're driving in your connected car to your psychiatrist’s office, or to do anything in your day-to-day life, how much of that is feeding into AI and black box models that ultimately are going to be used to make decisions about other parts of your life, including about whether you're suitable for employment or credit,” Chopra said.
Carly Kind, the new privacy commissioner within the Office of the Australian Information Commissioner, sounded a similar theme around AI and privacy (see here).
“Privacy law is, at its heart, about power,” said Kind, a former human rights lawyer attending her first GPS. “Control of personal information is a form of exercising power, and equally determining one's own personal information is a way of being empowered and exercising power.”
This story was reported and written by Amy Miller, Sam Clark, Xu Yuan, Madeline Hughes, Jenn Brice, Claude Marx, Matthew Newman and Mike Swift.