In late March, citing concerns that “not even the creators of powerful new artificial intelligence systems can understand, predict, or reliably control them,” more than a thousand AI sector experts and researchers published an open letter calling for a six-month pause in research on artificial intelligence systems more powerful than GPT-4, the latest Generative Pre-Trained Transformer model and the technology behind the popular ChatGPT.
Max Cacchione, director of innovation with University of Miami Information Technology (UMIT), and David Chapman, an associate professor of computer science with the Institute for Data Science and Computing (IDSC), both dismissed the feasibility of any such moratorium.
“Zero chance it will happen. AI is like a virus—and you can’t contain a virus,” said Cacchione, who also directs Innovate, a group that supports and implements innovative technology initiatives across the University. “You can put a rule or law in place, but there’s always someone who will get around it, both nationally and internationally.”
Chapman pointed to the intense competition in the industry as a major reason no pause would be enacted.
“If we pause AI research, who else is going to proceed to develop the technology faster than us? These new tools and models are really coming to market now and, if we don’t pursue them, then someone else will be making those advances,” Chapman said.
Cacchione acknowledged, though, that the concerns outlined in the letter were warranted.
“The only thing that’s preventing a disaster right now is that AI is contained in an environment where it’s not actionable—it’s not connected to commercial airlines, a nuclear facility, a dam or something like that,” Cacchione said. “If it were connected right now, it would be in a position to cause a lot of damage.
“The problem is that AI is an intelligence without any morals or guidance,” he added. “It’s without a soul, so it’s going to do what’s most logical—and it won’t feel bad about us or factor in the long-term survival of humanity if it’s not programmed to do so.”
Recently, the AI image generator Midjourney was used to create a number of false images, including Pope Francis in a puffy white parka and Donald Trump being arrested and then escaping from jail. The small startup has since, at least temporarily, disabled its free trial option, but the brouhaha prompted media outlets to decry the absence of oversight.
Cacchione stressed that there is no single body responsible for regulating AI research and that there are relatively few regulations focused solely on AI.
He noted, though, that a range of organizations and agencies, including the European Union, the United Nations Group of Governmental Experts on Autonomous Weapons Systems, the Institute of Electrical and Electronics Engineers, the Partnership on AI, and the Global Partnership on AI, are working to develop guidelines and frameworks for the ethical and responsible use of AI.
Cacchione also mentioned efforts to regulate AI at the U.S. federal level, pointing out that Congress passed the National AI Initiative Act to coordinate federal investments in AI research and development. The law also included provisions for establishing a national AI research resource task force, promoting AI education and training, and addressing the ethical and societal implications of AI.
Chapman noted that, historically, regulatory policy has always lagged behind technological advances and that, if this were not the case, advances important to humankind would be stymied.
“The idea that AI can be used to create false content, among other things—these are just things that society’s going to evolve to address,” Chapman said. “Regulations for AI are going to catch up and progress over time, and societal norms will change as we gain access to more powerful tools that are ultimately going to help us live more productive lives.”
Cacchione pointed out that AI research dates to the 1950s, when computer scientists first began to explore the concept of creating intelligent machines. The term “artificial intelligence” was coined in 1956 by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon at the Dartmouth Conference.
He highlighted the many milestones and the remarkable pace of development over the past decades, which have led to today’s self-driving cars, medical diagnostics, and robotics.
“The potential applications of AI are vast and include improving healthcare, addressing climate change, exploring space, and advancing scientific research,” he said. “While there are still significant challenges to be overcome, AI has the potential to revolutionize many aspects of our lives and create new opportunities for innovation and progress.”
Yet, while recognizing the tremendous upside, Cacchione highlighted the parallels between AI and cryptocurrency and the potential for misuse in both sectors.
“Both have the potential to be used for malicious purposes, such as money laundering, fraud, or cyberattacks,” Cacchione said. “This potential for misuse has raised concerns among regulators, who worry that these technologies could be used to undermine national security, financial stability, or consumer protections.”
The innovation specialist noted that both sectors are characterized by decentralization, which allows them to operate outside of traditional regulatory frameworks without being subject to the same types of oversight as other industries. This can make it difficult for regulators to enforce existing laws and regulations or to develop new regulations that effectively address the unique challenges these technologies present.
Both specialists concurred that AI has transitioned to a new phase, from research and development to commercialization.
“People have been doing really impressive things with generative adversarial networks, and AI image generation software has been in development, at least on a small scale in research labs, for the past eight years or so,” Chapman noted.
What’s new and different, he said, is the amount of computing resources and data people are now investing in training these models.
“The biggest change in the last year is that we’re starting to see machine learning, deep learning, hit the mass market,” he said. “It’s not just research software anymore; you can actually see tools such as ChatGPT, which have been in research for the past decade or so, finally starting to go into production, and you start to finally have access to that technology.”
Chapman highlighted AI’s benefits and potential to save both cost and labor and improve efficiency. He emphasized that ultimately AI is a tool, an algorithm, that is based on data analysis and statistical modeling and that depends on humans to provide input.
He noted that AI can now create images more quickly and that those images can depict nearly anything a user wants, for example, special effects for a movie.
“That’s a great use of this technology, and something that would save a lot in terms of cost. The experience would be better just because you’re able to automate the process of creating images,” Chapman said.
“So, the question is who is using artificial intelligence and for what purpose?” Chapman said.