The future of AI regulation is up in the air: What’s your next move?


AI regulation has always been a hot topic. But with AI guardrails set to be dismantled by the incoming U.S. administration, regulation has also become a big question mark, adding complexity and a great deal more volatility to an already complicated compliance landscape. The VentureBeat AI Impact Tour, in partnership with Capgemini, stopped in Washington D.C. to talk about the evolving risks and surprising new opportunities the upcoming regulatory environment will bring, plus insights into navigating the new, uncertain normal.

VB CEO Matt Marshall spoke with Vall Hérard, SVP of Fidelity Labs, and Xuning (Mike) Tang, senior director of AI/ML engineering at Verizon, about the significant and growing challenges of AI regulation in financial services and telecom, and dug into risk management, questions of accountability and more with Steve Jones, EVP of data-driven business and generative AI at Capgemini.

Accountability is a moving target

The problem, Jones says, is that a lack of regulation boils down to a lack of accountability for what your large language models are doing, including hoovering up intellectual property. Without regulations and legal ramifications, resolving issues of IP theft will either come down to court cases or, more likely, especially when the LLM belongs to a company with deep pockets, the responsibility will slide downhill to the end users. And when profitability outweighs the risk of a financial hit, some companies are going to push the boundaries.

“I think it’s fair to say that the courts aren’t enough, and the fact is that people are going to have to poison their public content to avoid losing their IP,” Jones says. “And it’s sad that it’s going to have to get there, but it’s absolutely going to have to get there if the risk is, you put it on the internet, suddenly somebody’s just ripped off your entire catalog and they’re off selling it directly as well.”

Nailing down the accountability piece

In the real world, unregulated AI companionship apps have led to genuine tragedies, like the suicide of a 14-year-old boy who isolated himself from friends and family in favor of his chatbot companion. How can product liability come to bear in cases like these, to keep it from happening to another user, if regulations are rolled back even further?

“These massive weapons of mass destruction, from an AI perspective, they’re phenomenally powerful things. There should be accountability for the control of them,” Jones says. “What it will take to put that accountability onto the companies that create the products, I believe firmly that that’s only going to happen if there’s an impetus for it.”

For instance, the family of the child is pursuing legal action against the chatbot company, which has since imposed new safety and auto-moderation policies on its platform.

Risk management in a regulation-light world

Today’s AI strategy will need to revolve around risk management: understanding the risk you’re exposing your business to and staying in control of it. There isn’t a lot of outrage around the issue of potential data exposure, Jones adds, because from a business perspective, the real outrage is how an AI slip-up might damage public perception, along with the threat of a court case, whether it involves human lives or bottom lines.

“The outrage piece is if I put a hallucination out to a customer, that makes my brand look terrible,” Jones says. “But am I going to get sued? Am I putting out invalid content? Am I putting out content that makes it look like I’ve ripped off my competition? So I’m less worried about the outrage. I’m more worried about giving lawyers business.”

Taking the L out of LLM

Keeping models as small as possible will be another critical strategy, he adds. LLMs are powerful and can accomplish some stunning tasks, but does an enterprise need its LLM to play chess, speak Klingon or write epic poetry? The larger the model, the bigger the potential for privacy issues and the more potential threat vectors, Tang noted earlier. Verizon has a huge volume of internal information in its traffic data; a model that encapsulates all of that information would be both massive and a privacy risk, so Verizon aims to use the smallest model that delivers the best results.

Smaller models, built to handle specific, narrowly defined tasks, are also a key way to reduce or eliminate hallucinations, Hérard said. Compliance is easier to control that way: when the data set used to train the model is small enough, a full compliance review becomes possible.

“What’s amazing is how often, in enterprise use cases, understanding my business problem, understanding my data, that small model delivers a phenomenal set of results,” Jones says. “Then combine it with fine tuning to do just what I want, and reduce my risk even more.”
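To make the small-model strategy concrete, here is a minimal sketch of what fine-tuning a compact model for one narrowly defined task might look like, using Hugging Face's transformers library. The model choice, labels and training examples are hypothetical stand-ins for illustration, not anything the panelists described:

```python
# Minimal sketch: fine-tune a small encoder model on one narrow task
# (intent classification) instead of deploying a general-purpose LLM.
# Model, labels and example data are hypothetical illustrations.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)
from datasets import Dataset

# A compact model (~66M parameters) rather than a multi-billion-parameter LLM.
MODEL_NAME = "distilbert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

# A tiny stand-in for a reviewed, compliance-checked internal data set.
train_data = Dataset.from_dict({
    "text": [
        "My bill is higher than last month",
        "I want to add a line to my plan",
        "The network is down in my area",
    ],
    "label": [0, 1, 2],  # 0 = billing, 1 = sales, 2 = outage
})

def tokenize(batch):
    # Convert raw text into fixed-length token IDs for the model.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=64)

train_data = train_data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="small-model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train_data,
)
trainer.train()
```

The shape of the pipeline is the point: a training set small enough to review in full and a model sized to a single task, which is what makes the compliance review Hérard describes feasible.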

