AI will be smarter than humans in 20 years, ‘Godfather of AI’ warns

(NewsNation) — Geoffrey Hinton, widely known as the “Godfather of AI,” warned that artificial intelligence will likely become smarter than humans within the next 20 years.

During a Monday interview on NewsNation’s “CUOMO,” Hinton said his primary concern isn’t AI’s immediate risks, such as job displacement or cyberattacks, but rather the longer-term challenge of maintaining human control over systems that exceed human intelligence.

“There are very few examples of smarter things being controlled by less smart things,” Hinton said. “We don’t know whether we’re going to be able to stay in control of superintelligent AI.”


Will humans lose control of AI?

The computer scientist acknowledged that stopping AI development isn’t realistic, given its beneficial applications in health care, education and other fields. 

He cited AI’s potential to improve medical diagnoses, accelerate drug development and provide personalized tutoring that could help students learn twice as fast as traditional classroom instruction.

When asked about humanity’s future relationship with AI, Hinton compared it to that of a mother and baby, in which “the mother is smarter than the baby, but the baby’s in control” because evolution programmed the mother to care for the child.


Call centers will be replaced by AI

Regarding job displacement, Hinton predicted that positions in call centers and paralegal work would be among the first to be automated, as AI systems will possess more knowledge than current human workers in these roles. 

However, he said that jobs requiring manual dexterity and problem-solving in unpredictable environments, such as plumbing work in old buildings, would remain safe for at least a while.

While robotic capabilities are advancing, Hinton said that physical robots are developing more slowly than large language models, though he expects this gap to close over time.