The Latest Insights from DeepMind on Our Progress Towards AGI

Let’s dive into the hot topic that’s got the AI world buzzing: the quest for artificial general intelligence (AGI), the kind of AI that could match or even surpass us humans across the board. It’s a heated debate. Some researchers think AGI is still decades away, while others say it’s just around the corner, pointing to glimpses of it in today’s large language models. A few even argue that the tech we have now already counts as AGI.

Over at Google DeepMind, a team that includes Chief AGI Scientist Shane Legg has been racking their brains to make sense of it all. They’ve cooked up a new framework for classifying the capabilities and behavior of AGI systems and their precursors. They’re adamant that the AI research community needs to get together and pin down exactly what AGI means, and they propose sizing up AI along three axes: how it performs, how versatile it is, and how independently it can operate.

Getting into the nitty-gritty, one mighty challenge with AGI is nailing down its definition. The DeepMind team pored over no fewer than nine existing definitions, including classics like the Turing Test, Steve Wozniak’s famous coffee test (send a robot into an unfamiliar kitchen and see if it can brew a cup of coffee), and even checks for whether a machine is “conscious.” They argue that each of these tests misses the mark on what AGI really entails.

Look at our present-day language whizzes: sure, they can chat well enough to dupe us into thinking they’re human, but passing for human alone doesn’t make them AGI, because they still trip up in ways that show they’re not all there yet. The whole consciousness question is a tough nut to crack, too, since we have no agreed way to measure it. And while a robot that can’t brew a cup of joe in a strange kitchen isn’t AGI, one that can doesn’t automatically graduate to AGI status either.

So, the DeepMind crew suggests checking out AI through six fresh lenses (a toy code sketch after the list shows how the main axes might fit together):

1. Judge AI on what it can do, not on whether it thinks or feels like us.
2. Eye both the range of tasks an AI can handle (generality) and how well it handles them (performance), to make sure it’s really up to scratch.
3. Stick to cognitive and metacognitive tasks (like learning new skills), but don’t stress if it can’t move around or manipulate the physical world.
4. It’s cool if an AI merely has the potential to rock AGI-level tasks, even if it isn’t deployed yet. Getting it out into the world raises all kinds of non-technical hurdles, like legal and safety concerns.
5. Focus on real-world tasks that people actually value, not just benchmarks that happen to be easy to score.
6. Finally, remember that reaching AGI isn’t just one big event. It’s a journey with different stages along the way.
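
To make those lenses concrete, here’s a minimal sketch (my own illustration, not code from the DeepMind paper, and every name and field in it is hypothetical) of how a system’s performance, generality, and potential might be recorded under this framework:

```python
from dataclasses import dataclass, field

@dataclass
class SystemProfile:
    """Hypothetical record of an AI system under the six lenses:
    capabilities (not inner experience), performance and generality,
    cognitive tasks only, and potential rather than deployment."""
    name: str
    deployed: bool  # lens 4: potential counts even before deployment
    # Per-task performance as a human percentile, e.g. {"coding": 60.0}.
    # Only cognitive/metacognitive tasks are listed (lens 3).
    scores: dict[str, float] = field(default_factory=dict)

    @property
    def generality(self) -> int:
        """Breadth: how many real-world task families it handles at all."""
        return len(self.scores)

    def performance_at(self, percentile: float) -> list[str]:
        """Tasks where it performs at or above a given human percentile."""
        return [t for t, s in self.scores.items() if s >= percentile]

# Usage: a narrow champion scores sky-high on one task family, while an
# emerging generalist scores modestly across many (illustrative numbers).
narrow_champ = SystemProfile("narrow-champ", deployed=True,
                             scores={"protein folding": 100.0})
generalist = SystemProfile("emerging-generalist", deployed=True,
                           scores={"writing": 60.0, "coding": 55.0,
                                   "math": 20.0, "planning": 15.0})
print(generalist.generality, generalist.performance_at(50.0))
```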

To plot out how smart these AI systems are getting, DeepMind’s drawn up a nifty chart. It maps “performance” against “generality” across levels running from “No AI” at the bottom up to AGI that’s blowing humans out of the water at everything. “Performance” checks how an AI stacks up against humans on a task, while “generality” is about how many different kinds of tasks the AI can ace.

For example, we’ve already got narrow AIs that are superhuman champs at single tasks, like AlphaZero at board games and AlphaFold at predicting protein structures. The chart helps sort AI systems by skill level: even our latest chatbots get decent marks for their handle on many language tasks, but there’s plenty of room for them to level up in areas like math or strategic thinking.
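
As a rough illustration of that ladder, here’s a sketch in code. The rung names and percentile cutoffs paraphrase the paper’s matrix, but the function itself is my own toy, not anything DeepMind published:

```python
from enum import IntEnum

class Level(IntEnum):
    """Performance rungs from DeepMind's matrix, Level 0 through 5."""
    NO_AI = 0       # no AI involved at all
    EMERGING = 1    # equal to or somewhat better than an unskilled human
    COMPETENT = 2   # at least 50th percentile of skilled adults
    EXPERT = 3      # at least 90th percentile
    VIRTUOSO = 4    # at least 99th percentile
    SUPERHUMAN = 5  # outperforms all humans

def level_for(percentile: float) -> Level:
    """Map a human-relative percentile on a task to a rung (my sketch)."""
    if percentile >= 100:
        return Level.SUPERHUMAN
    if percentile >= 99:
        return Level.VIRTUOSO
    if percentile >= 90:
        return Level.EXPERT
    if percentile >= 50:
        return Level.COMPETENT
    if percentile > 0:
        return Level.EMERGING
    return Level.NO_AI

# A protein-folding champ sits at SUPERHUMAN on its one task, while a
# chatbot might land at COMPETENT on writing but EMERGING on math.
print(level_for(100), level_for(55), level_for(20))
```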

DeepMind points out that even if an AI system looks capable on paper, it might not hold up quite as well in real life. Take those fancy text-to-image models: they can dream up pictures that’d give most of us a serious case of art envy, but they still glitch in telltale ways (think mangled hands) that keep them from reaching top-artist status.

Their big idea is that a true AGI test would throw a huge spread of tasks at an AI, everything from wordplay and math puzzles to social smarts and creativity. They’re also clear that no fixed list of tasks can cover everything an AGI might be asked to do. So they imagine a living benchmark that evolves over time, with new challenges added as we dream them up.
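
Here’s what such a living benchmark could look like in spirit. This is a toy sketch under my own assumptions, not an interface DeepMind has proposed:

```python
from typing import Callable, Dict

# A "model" here is anything we can ask to attempt a task; a task
# returns a human-relative percentile score (0-100) for that model.
Task = Callable[[object], float]

class LivingBenchmark:
    """A task registry that grows over time, since no fixed list can
    cover everything an AGI might be asked to do."""

    def __init__(self) -> None:
        self._tasks: Dict[str, Task] = {}

    def register(self, name: str, task: Task) -> None:
        """Add a new challenge as the community dreams one up."""
        self._tasks[name] = task

    def evaluate(self, model: object) -> Dict[str, float]:
        """Score the model on every task registered so far."""
        return {name: task(model) for name, task in self._tasks.items()}

# Usage: start with wordplay and math, bolt on social reasoning later.
bench = LivingBenchmark()
bench.register("wordplay", lambda m: 62.0)      # stand-in scorers
bench.register("math puzzles", lambda m: 31.0)
bench.register("social smarts", lambda m: 48.0)
print(bench.evaluate(model=None))
```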

DeepMind’s also got a separate chart for figuring out how independently an AI can operate and what risks come with letting it do its thing. This one has levels running from all-human to all-AI, and it flags the kinds of problems we might run into at each rung, from leaning too hard on a helpful tool to handing the keys to a fully autonomous agent.
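
For flavor, here’s that autonomy ladder as a sketch. The rung names paraphrase the paper’s levels as I understand them, and the risk notes are my own illustrative shorthand, not DeepMind’s wording:

```python
from enum import IntEnum

class Autonomy(IntEnum):
    """Autonomy rungs, from all-human to all-AI (paraphrased)."""
    HUMAN_ONLY = 0    # no AI in the loop
    TOOL = 1          # human drives; AI automates rote subtasks
    CONSULTANT = 2    # AI gives advice when asked
    COLLABORATOR = 3  # human and AI split the work as peers
    EXPERT = 4        # AI leads; human provides oversight
    AGENT = 5         # AI acts fully independently

# Illustrative (not exhaustive) concerns that grow with autonomy.
EXAMPLE_RISKS = {
    Autonomy.TOOL: "de-skilling of human operators",
    Autonomy.CONSULTANT: "over-trust in AI advice",
    Autonomy.EXPERT: "weakened human oversight",
    Autonomy.AGENT: "misaligned goals with no one at the wheel",
}
```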