
Balancing innovation and protection on the AI frontier

A recent AI Safety Summit in the United Kingdom and the newly commissioned U.S. Artificial Intelligence Safety Institute (USAISI) are critical first steps by governments to explore how to deal with the benefits and threats posed by artificial intelligence as it continues to evolve.
Britain's Prime Minister Rishi Sunak, center, speaks during a plenary session at the AI Safety Summit at Bletchley Park in Milton Keynes, England, on Thursday, Nov. 2. Photo: The Associated Press

The artificial intelligence genie is out of the bottle and has become ubiquitous in our lives. Its newest form, frontier AI, offers immense opportunities and potentially catastrophic risks for humanity, developments that were explored by representatives from 28 countries attending the AI Safety Summit held Nov. 1-2 in the United Kingdom.

University of Miami experts Andres Sawicki, professor in the School of Law and director of the Business of Innovation, Law, and Technology Concentration, and Sara Rushinek, professor in the Miami Herbert Business School’s Department of Business Technology, monitored the summit’s outcomes. They are among the many scholars exploring the vast dimensions of the technology that uses machines to mimic human decision-making and problem-solving. 

Both experts emphasized that governments—which historically lag in terms of managing new technologies—need to educate themselves to create frameworks to ensure that AI is trustworthy and helpful, rather than destructive and deceptive.

Andres Sawicki

Sawicki contextualized the regulation dilemma.

“The general problem, particularly in technology, is that the government doesn’t know as much about things as the private actors,” he said. “Google knows more about what its self-driving cars can and cannot do, and, in the classic example, the tobacco companies knew far more about the harms of smoking than the FDA [U.S. Food and Drug Administration],” he added.

“In this context of AI, you have companies coming to the government and saying, ‘Hey, this product we’re developing, you should treat like you treat nuclear weapons,’ ” Sawicki continued. “That’s a very unusual thing to happen.”

Given that request, Sawicki argued, it would be extremely irresponsible for the government to do nothing.

“You might think these companies are exaggerating the risk that their technology poses. You might think there’s some risk of regulatory capture. But at the most fundamental level, when the private actors are telling you, ‘Treat us the way you treat nuclear weapons,’ the government has to respond,” Sawicki said. 

The first-ever global summit on artificial intelligence safety, held at Bletchley Park in Milton Keynes, England (home of the first modern computers, built to break Nazi war codes during World War II), served as a basic first step toward international collaboration.

Sara Rushinek

It was encouraging that China attended, along with the United States, European nations, and Japan; the downside, Sawicki noted, was that Russia, Iran, and North Korea did not. While there were no substantive commitments or agreements at the summit (realistically, it was too early to expect any), the attending nations agreed to hold another summit and to produce a State of the Science report ahead of it.

Yoshua Bengio, Geoffrey Hinton, and Yann LeCun are widely considered the fathers of the current generation of AI. Bengio will participate in generating the report, Sawicki noted.

“That’s important because it speaks to the issue of the governments not knowing as much about this as do the academics and companies and people working on the front lines,” he said. 

Rushinek has been collaborating on a range of AI-related University initiatives, including with an integrative medicine group using AI to assess bloodwork and recommend treatments, and at the School of Nursing and Health Studies exploring how AI-powered chatbots can enhance the responses of mannequins at the Simulation Hospital Advancing Research and Education.

“What I see here is everybody getting on the bandwagon, and AI is not a fad at all. We see a lot of other technologies that come and go, but this is substantial because it really starts having that user interface that everyone can see the benefits and the disadvantages,” Rushinek said.

To promote safe use, she urged following the example of how companies have approached cybersecurity and other new technologies, using “red-teaming”—assigning personnel to act as “the enemy” to expose systems’ vulnerabilities as a way to ultimately improve their defenses against genuine threats.

Rushinek characterized the Biden administration’s recent landmark executive order, which establishes new AI safety standards and commissions the USAISI consortium, as a good start that should be continued.

“The question now is training and education,” she said. “This is where universities have to go; rather than being afraid of AI, they need to teach students how to create systems on top of it.”

Rushinek has formed an academic partnership with the San Diego-based firm Personal.AI, which has visited her class to demonstrate how students can use its AI model to create a digital extension of a personal brand.

Sawicki, a specialist in intellectual property, highlighted the tremendous potential of AI in both education and law.

“I’m more of an AI optimist and see the tremendous potential in this technology to improve humanity,” he said. “As an educator, the potential of these tools to improve the quality of the educational experience and reduce the cost is tremendous.”

While optimistic about the beneficial uses of the technology, Sawicki emphasized the imperative for governments to build a regulatory framework that safeguards against existential threats, and he recognized the challenges they face.

“What makes the current version of AI so potentially dangerous is that you can do just about anything,” he said. “That makes it really hard to figure out how to measure the risks it poses. Imagine the tests you’d need to do to determine if electricity is safe? That’s the challenge they [personnel at the new USAISI] face.”

The most important question for Sawicki is whether the government has the personnel qualified to address the task at hand. 

“Currently, there is not enough tech capacity in the federal government to do a good job of this, but the Biden administration’s executive order is sensitive to this fact and is engaged in efforts to acquire that expertise,” he said. 

“Still, it’s an initial step, an information-gathering exercise that lays the foundation,” Sawicki added. “The risk is here. The administration needs to be attentive and not to take too heavy a hand and so risk losing out on a lot of the positive impacts the technology can have.”