The Potential of Unified Superintelligence

“Superintelligence” is a hot topic right now. It refers to AI that could one day be smarter than us in every way, from solving puzzles to painting masterpieces. Not long ago it seemed like pure science fiction, but many researchers now believe it could arrive within a decade or so. And that has prominent technologists and policymakers worried: what if a super-smart AI doesn’t understand us humans, our values, or what we actually want?

Plenty of smart people are working on this by trying to build AI that truly understands us. The team at Anthropic, for example, is developing Constitutional AI, which trains models to follow an explicit set of human-written principles. OpenAI, meanwhile, has dedicated a significant share of its computing resources to making sure advanced AI behaves safely.

But I’m not sold on this just yet. These safety measures sound good, but can we really trust them to keep a superintelligence in check over the long run? It makes you wonder whether there’s a better route to superintelligence altogether.

I believe there is. For the past ten years I’ve been working on Collective Superintelligence (CSi). The idea isn’t to outsource our intelligence to machines, but to amplify it by connecting people into networked systems that can tackle problems no individual could solve alone, all while keeping our human values and sensibilities baked into the process.

Sound a bit like sci-fi? It’s not as outlandish as it seems. Many social species do something like this all the time. Think of how schools of fish and swarms of bees move together as if reading each other’s minds, with no leader in charge. They aren’t holding votes; they reach fast, surprisingly good decisions by interacting in real-time swarms that converge on the best option.

Why can’t we humans do the same? That question kicked off my own quest. I founded a company called Unanimous AI in 2014, and we started experimenting with ways to make groups of people behave like those natural swarms. Our early systems were simple: participants manipulated graphical objects on a screen in real time to express their preferences, while AI algorithms monitored the interactions to gauge how strongly each person felt about each option.
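To make that concrete, here is a minimal sketch of how one real-time aggregation step might work, assuming each participant pulls a shared “puck” toward their preferred option with a strength proportional to their conviction. The option names, layout, and update rule below are illustrative assumptions, not Unanimous AI’s actual algorithm.

```python
import math

# Hypothetical options laid out at 2-D positions on the screen.
OPTIONS = {
    "Horse A": (1.0, 0.0),
    "Horse B": (0.0, 1.0),
    "Horse C": (-1.0, 0.0),
    "Horse D": (0.0, -1.0),
}

def step_swarm(puck, pulls, dt=0.1):
    """Move the shared puck one time step.

    `pulls` maps each participant to (preferred_option, conviction in [0, 1]).
    The puck drifts toward the conviction-weighted average of the directions
    the participants are pulling in.
    """
    fx = fy = 0.0
    total = 0.0
    for target, conviction in pulls.values():
        tx, ty = OPTIONS[target]
        dx, dy = tx - puck[0], ty - puck[1]
        norm = math.hypot(dx, dy) or 1.0        # unit vector toward the target
        fx += conviction * dx / norm
        fy += conviction * dy / norm
        total += conviction
    if total:
        puck = (puck[0] + dt * fx / total, puck[1] + dt * fy / total)
    return puck

# Usage: three participants pulling with different conviction levels.
puck = (0.0, 0.0)
pulls = {
    "alice": ("Horse A", 0.9),
    "bob":   ("Horse B", 0.4),
    "carol": ("Horse A", 0.7),
}
for _ in range(50):
    puck = step_swarm(puck, pulls)
print("puck position:", puck)  # drifts toward Horse A, the strongest collective pull
```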

And it worked: it genuinely amplified the group’s intelligence. When a CBS reporter challenged us to pick the winners of the Kentucky Derby, our swarm nailed it, turning a $20 bet into roughly $11,000. Sure, there was a dash of luck involved, but it wasn’t just a fluke; it was swarm intelligence at work.

Creating a full-blown Collective Superintelligence was a taller order, though. Our early techniques worked well for narrow questions, but the real thing would have to handle big, open-ended problems. We needed technology that could let large numbers of people deliberate seamlessly using our most powerful tool: language.

Here’s the catch: people converse well in small groups but descend into a jumbled mess when the crowd gets too big. That seemed like an insurmountable barrier to Collective Superintelligence, until recent advances in AI opened up new ways to build conversational swarms.

What we’ve developed is Conversational Swarm Intelligence (CSI), and it changes the picture. It lets crowds of nearly any size hash out complex problems in real time and converge on smart answers, amplified by the dynamics of a swarm.

Here’s how we pulled it off: by looking again at how fish schools operate. Each fish has a special sensory organ, the lateral line, that detects the motion of its nearest neighbors, so the whole school moves as one without anyone leading the dance. Because each fish interacts with only a handful of others, information ripples across the entire group without overwhelming any individual.

Can humans form these conversational swarms? They can. Using a structure called hyperswarms, we split a large group into small chat rooms of about five people, each tackling the same problem in parallel. But to get real swarm behavior, insights have to flow between rooms. That’s where AI agents come in: playing the role of the fishes’ lateral line, they sit in each conversation, distill the key points as they emerge, and pass those insights along to the other rooms, as sketched below.
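Here is a minimal sketch of that hyperswarm structure, assuming rooms of five and an AI agent that summarizes each room and broadcasts the summary to every other room. The `summarize_insights` function is a hypothetical stand-in for an LLM call, and a real system might route insights only between neighboring rooms rather than broadcasting to all of them.

```python
import random
from dataclasses import dataclass, field

ROOM_SIZE = 5  # small enough for natural conversation

@dataclass
class Room:
    members: list                                  # participant IDs in this chat room
    messages: list = field(default_factory=list)   # running transcript of (speaker, text)

def build_hyperswarm(participants, room_size=ROOM_SIZE):
    """Split a large population into parallel chat rooms of roughly room_size people."""
    random.shuffle(participants)
    return [Room(participants[i:i + room_size])
            for i in range(0, len(participants), room_size)]

def summarize_insights(messages):
    """Hypothetical stand-in for an LLM call that distills a room's key insights."""
    return f"[summary of {len(messages)} messages]"

def propagate_insights(rooms):
    """Each room's AI agent shares its distilled insights with the other rooms,
    acting like the lateral line that links a fish school together."""
    summaries = [summarize_insights(room.messages) for room in rooms]
    for i, room in enumerate(rooms):
        for j, summary in enumerate(summaries):
            if i != j:
                room.messages.append(("agent", f"Insight from room {j}: {summary}"))

# Usage: 100 participants become 20 parallel rooms of five.
rooms = build_hyperswarm(list(range(100)))
for room in rooms:
    room.messages.append(("human", "initial discussion..."))
propagate_insights(rooms)
print(len(rooms), "rooms;", len(rooms[0].messages), "messages in room 0 after one round")
```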

And it works. In one test we took a page from the classic “wisdom of crowds” story, in which a crowd at a county fair guessed the weight of an ox and the average guess turned out to be remarkably accurate. We did the same with a jar of gumballs: the average of individual survey guesses was way off, and even ChatGPT on its own wasn’t especially close. But when we used our swarm method, the group’s estimate landed far nearer the true count; it wasn’t even a contest.

So what? Well, in the here and now, CSI tech