A Google AI model developed a skill it wasn’t expected to have

From Yahoo Finance:

Concerns about AI developing skills independently of its programmers’ wishes have long absorbed scientists, ethicists, and science fiction writers. A recent interview with Google’s executives may be adding to those worries.

In an interview on CBS’s 60 Minutes on April 16, James Manyika, Google’s SVP for technology and society, discussed how one of the company’s AI systems taught itself Bengali, even though it wasn’t trained to know the language. “We discovered that with very few amounts of prompting in Bengali, it can now translate all of Bengali,” he said.

Google CEO Sundar Pichai confirmed that there are still elements of how AI systems learn and behave that surprise experts: “There is an aspect of this which we call— all of us in the field call it as a ‘black box’. You don’t fully understand. And you can’t quite tell why it said this.” The CEO said the company has “some ideas” about why this could be the case, but it needs more research to fully comprehend how it works.

CBS’s Scott Pelley then questioned the wisdom of releasing to the public a system that its own developers don’t fully understand, but Pichai responded: “I don’t think we fully understand how a human mind works either.”

AI’s development has also come with glaring flaws that lead to fake news, deepfakes, and weaponization. Models sometimes deliver false information with great confidence, in what the industry calls “hallucinations.”

Asked if Google’s Bard is getting a lot of “hallucinations,” Pichai responded: “Yes, you know, which is expected. No one in the, in the field has yet solved the hallucination problems. All models do have this as an issue.” The cure, Pichai said, lies in developing “more robust safety layers before we build, before we deploy more capable models.”

Link to the rest at Yahoo Finance

