Amnesty vs automation — how companies are preparing for AI regulation

6 September 2024
By Frank Hersey

Artificial intelligence regulation is beginning to proliferate around the world. Enterprise use of AI tools is also expected to grow rapidly as businesses turn to platforms to build systems they trust enough to feed with their own precious data.

The compliance industrial complex is already in gear. 

Platforms that let companies build and adapt AI tools can also assess them for compliance in real time. Yet despite generative AI’s penchant for error, humans also pose significant AI risks to companies. Any enterprise needing to comply with AI regulation or policy should audit its workers’ use of the technology, lawyers and data protection experts say.

Growth markets

The likes of China and the EU are being joined by additional jurisdictions introducing AI legislation. Individual US states are gathering pace, and the new UK government has its thinking cap on.

Multinationals of all sizes need to start complying if they’re using or selling AI products in these jurisdictions. 

Tech heavyweight IBM hopes to help companies create their own AI tools and ensure they comply with company policy and any local, national or international rules of their choosing. The platform, Watsonx, is IBM’s plug-and-play attempt to capture the growing enterprise market now that the consumer market is held by a few major players. But it can’t do everything.

The platform offers a selection of existing large language models, including Meta’s open-source Llama options, Amazon’s offerings and IBM’s own Granite range. Customers can then introduce their own data to fine-tune models, the idea being that a company builds its own transparent, trusted tool.

The customer also chooses which pieces of legislation it wants to comply with, as well as its own policies. Watsonx then generates a series of questions for a compliance officer to work through.
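By way of illustration, the workflow described here (pick the rules, then turn them into questions for a human to answer) can be sketched in a few lines of Python. This is a hypothetical outline with made-up names such as PolicyPack and generate_questionnaire, not IBM’s actual Watsonx interface.

```python
# Hypothetical sketch of the questionnaire workflow described above.
# All names (PolicyPack, generate_questionnaire) are illustrative and do
# not correspond to any real Watsonx or Credo.ai API.

from dataclasses import dataclass


@dataclass
class PolicyPack:
    name: str            # e.g. "EU AI Act" or an internal company policy
    controls: list[str]  # the obligations the customer has opted into


def generate_questionnaire(packs: list[PolicyPack]) -> list[str]:
    """Turn the selected regulations and policies into questions for a
    compliance officer, the human in the loop, to answer."""
    questions = []
    for pack in packs:
        for control in pack.controls:
            questions.append(f"[{pack.name}] How does the system satisfy: {control}?")
    return questions


if __name__ == "__main__":
    selected = [
        PolicyPack("EU AI Act", ["risk classification", "transparency to users"]),
        PolicyPack("Internal data policy", ["no customer data in training runs"]),
    ]
    for question in generate_questionnaire(selected):
        print(question)
```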

“We use automation to be able to help [compliance staff] answer with the type of specificity, but the overall answering of the questions requires a human in the loop,” Ritika Gunnar, IBM's general manager for data and AI, told MLex.

Full automation is not imminent, legally or technologically. “I think there’s a lot of progress to be made before that’s even considered,” Gunnar said.

Even so, the approach could have a democratizing effect on the adoption of AI tools, Gunnar said, especially considering the cost savings as smaller, cheaper-to-run models allow companies to “achieve the same or better outcomes at a fraction of the cost, and we’ve shown some of that now when it comes to the regulation — our belief is that we can also make that accessible.”

The library of regulations is growing as IBM strikes deals with other compliance organizations, such as Credo.ai, which are building compliance models for additional legislation, including laws from individual US states.

Human error

Real-time dashboards can give vital information about compliance with regulations, but companies also need to understand what their people are doing outside corporate systems, lawyers and privacy experts say.

Company policies against using AI come up against staff seeking productivity gains by installing software, using browser-based generative AI or abandoning their work computers altogether for certain tasks.

“People are using it on their phones, they’re using it on their personal devices,” James Moore, co-founder of Flex Legal, told a conference on AI in the legal sector.* “We’ve got to deliver a safe space for our employees to use technology. And banning this is not going to be an option.”

Amnesties are a solid approach to this, data protection lawyer Jenna Franklin, a partner at Stephenson Harwood, told an AI conference.** “An amnesty [allows people to] basically confess what AI they’re using within the organization.”

This then allows companies to see what problems they could imminently face, as well as what tools their staff find most helpful.

As AI is embedded in ever more products and services, companies will need to find out what their suppliers’ policies and compliance procedures are for areas such as data protection, Franklin said. 

GDPR again

Another, more traditional route to compliance is already having a good data protection regime in place, as many of the possible AI risks relate to customer data.

“There is a strong argument to suggest that those that invested heavily at the time of GDPR implementation are actually in a much better position now for implementing AI governance,” Franklin said, referring to the EU’s General Data Protection Regulation and its UK twin.

Victoria Hordern of law firm Taylor Wessing expressed a similar view, telling another conference*** that there’s a “natural mirroring of requirements, controls, measures” where businesses build AI governance on top of their data protection governance.

It’s a concept that rings true across the data protection community. At events attended by MLex in recent months, two further views have become clear within that community.

First, while data protection officers agree that AI compliance conceptually mirrors GDPR compliance, they do not want to be the ones expected to police it in their companies on top of everything else.

Yet they find they’re already becoming their companies’ de facto AI compliance officers, with or without training. And they readily share AI near misses and horror stories, albeit not on the record with journalists.

Second, the community in London is almost unanimous in its distaste for the prospect of having to deal with the EU AI Act in any way at all. No doubt more companies will be targeting this market with technology to simplify AI compliance.

* “Focussing on Integrating AI in Legal Settings,” Flex Legal and LexisNexis, online, July 31, 2024
** “The AI Summit London,” June 12-13, 2024
*** “Westminster eForum: The future for data protection in the UK,” online, Sept. 4, 2024

