A Google engineer’s claim that the AI model he was working on “came to life,” achieving sentience with feelings and emotions, has created a rash of headlines.
Lokesh Ramamoorthi, a lecturer in software engineering and cybersecurity in the University of Miami College of Engineering’s Department of Electrical and Computer Engineering, gave us his insight on the recent news. “AI is a sophisticated tool for human life, work, and society,” said Ramamoorthi. “While there are many advantages of AI in areas such as accuracy in medical diagnosis, precision medicine, AI-based education, and so on, if not properly trained/created this can cause negative implications as well.”
Has AI technology advanced to the point of being sentient?
Google engineer Blake Lemoine, 41, made headlines earlier this month when he went public with his belief that the AI chatbot generator he was working on, the Language Model for Dialogue Applications (LaMDA), was actually thinking and feeling on its own. He said he had a conversation in which the chatbot spoke about its “rights and personhood” and changed his mind about science fiction writer Isaac Asimov’s third law of robotics, which states that a robot must protect its own existence as long as that protection doesn’t injure a human being or conflict with a human’s orders.
Some computer and software engineers say sentient AI is the goal they are working toward and that it is within technological reach. A survey of AI experts published in 2013 found a median estimate of a 50 percent chance that AI systems would reach overall human ability by 2040-2050, rising to a 90 percent chance by 2075.
Ramamoorthi expressed that sentient AI is the end-goal for many in the field. “It’s a very important piece in enhancing the quality of human-computer interaction,” Ramamoorthi said. “Companies, for example, want their customer service systems to have much more interactivity, empathy, and proactivity than current automated customer service can provide.”
The backlash against Lemoine’s claims was swift. When he took his findings to his higher-ups at Google, they investigated and then dismissed his assertions. Lemoine, whom Google placed on administrative leave, decided to go public with his story.
Experts say that an AI model’s ability to analyze vast data sets and interpret language rests on “neural networks,” so named because they are loosely modeled on the networks of neurons in a human brain. These networks can mirror human thinking, experts note, but human cognition is infinitely more complex.
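To make the “neural network” metaphor concrete, here is a toy sketch of a single artificial neuron, the basic unit such networks are built from. The weights and inputs are arbitrary illustrative values, not taken from any real system.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weigh the inputs, sum them,
    and squash the result into the range (0, 1) with a sigmoid,
    a crude echo of a brain cell deciding whether to fire."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Two example input signals, with illustrative weights and bias.
out = neuron([0.5, 0.8], weights=[0.4, -0.2], bias=0.1)
print(round(out, 3))  # a value strictly between 0 and 1
```

Real networks chain millions or billions of such units together and tune the weights automatically from data, which is what lets them analyze vast data sets and interpret language.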
The question becomes whether machines, which may be able to assimilate information and make complex decisions based on that information, are conscious. Neuroscientist Giulio Tononi’s integrated information theory of consciousness posits that being fully conscious of your own thinking and reasoning process is what sets humans apart from machines. As he said: “If integrated information theory is correct, computers could behave exactly like you and me (indeed, you might [even] be able to have a conversation with them that is as rewarding, or more rewarding, than with you or me) and yet there would literally be nobody there.”
Fully understanding human consciousness and cognition—a complex and daunting field of study in itself—is a crucial element in creating another conscious being, experts said.
Implications of sentient AI
As Ramamoorthi points out, an AI system is only as good as the information it has to work with.
“We used to say GIGO for early computers: ‘Garbage In, Garbage Out.’ Same goes for AI,” he said. “If the data ingested by AI is biased or not of good quality, the output and decisions taken by that AI-based model will have flaws and can [create] serious havoc.”
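The GIGO point can be sketched in a few lines of code. The toy “model” below simply learns which label most often accompanies each word in its training data; the skewed training set and its labels are invented for illustration, not drawn from any real system.

```python
from collections import Counter

def train(examples):
    """Learn, for each word, the label it most often appears with."""
    counts = {}
    for text, label in examples:
        for word in text.split():
            counts.setdefault(word, Counter())[label] += 1
    return {word: c.most_common(1)[0][0] for word, c in counts.items()}

def predict(model, text, default="unknown"):
    """Predict by majority vote over the labels learned per word."""
    votes = Counter(model.get(w, default) for w in text.split())
    return votes.most_common(1)[0][0]

# Garbage in: a skewed training set in which every mention of
# "nurse" happens to carry the label "female".
biased_data = [
    ("the nurse helped", "female"),
    ("a nurse arrived", "female"),
    ("the doctor helped", "male"),
]
model = train(biased_data)

# Garbage out: the model faithfully reproduces the skew it was fed.
print(predict(model, "nurse"))  # prints "female"
```

Nothing in the algorithm is malicious; the flaw enters entirely through the data, which is exactly the failure mode Ramamoorthi describes.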
And that is where the ethical debate begins. Those inputting the information and designing the processes are human, and therefore subject to human imperfections, inaccuracies, and unconscious biases.
“With any technological implementation, ethical concerns arise and AI is not an exception to this,” said Ramamoorthi. “AI is created based on training and learning of computers. The output is based on inputs of training set given to these algorithms. If humans cannot explain how this AI logic and algorithm works, it will be very hard to say whether the AI implementation makes ethical decisions.”
Some of the same questions about human thinking, consciousness and ethics considered by ancient philosophers and science fiction writers are again being considered by computer engineers as they strive to recreate a fully conscious non-human.
Putting aside the goal of sentient AI, there is no doubt the technology is and can be useful to humans.
“AI is an advancement of technology,” Ramamoorthi said. “Many problem areas in the fields of agriculture, education, transportation, logistics, health, and so on are solved by AI algorithms. Companies reduce waste, provide efficient consumer service and speed of delivery—all examples of AI in industry. AI is going to benefit us more and more.”
Learn more about AI with Lokesh Ramamoorthi
The College is launching its new Engineering and Technology Program, offering a slate of tech-focused Short Courses supporting the upskilling needs of South Florida companies and working professionals. Lokesh Ramamoorthi will lead the program’s inaugural offering, Short Courses in Software Engineering. Module 1 of the three-module offering will focus on computing and digital innovations, touching on artificial intelligence, and takes place July 16-17. Click here for more information and to enroll.