How 2024 is Poised to Revolutionize Our Minds with ‘Augmented Mentality’

In the near future, an AI assistant will be a constant companion, whispering helpful advice as you go about your daily life. Picture it: guidance while shopping in a busy store, taking your kids to the pediatrician, or grabbing a snack at home. This AI will mediate your experiences, including your interactions with friends, family, coworkers, and strangers.

Put bluntly, these assistants will mediate your daily life, and "mediate" is a polite way of saying that AI will start to shape what you do, say, think, and feel. Some might find this unsettling, yet society is likely to embrace these technologies, allowing friendly AI voices to coach us so seamlessly that we'll soon wonder how we ever managed without them.

When I mention "AI assistant," you might think of older tools like Siri or Alexa, which respond to simple verbal commands. The next generation of AI assistants will be different: they'll have contextual awareness. These systems will understand not just what you say, but also what's happening around you, through data captured by cameras and microphones on AI-powered devices you wear.

Ready or not, context-aware AI assistants will begin integrating into society in 2024, dramatically changing our world. They'll offer remarkable benefits, but they'll also raise new concerns about privacy and autonomy.

On the plus side, these assistants will deliver timely information effortlessly, making you feel like you have a superpower. Imagine knowing everything about a product as you pass a store window, identifying plants on a hike, or instantly figuring out what to cook with the ingredients in your fridge. However, there are potential downsides. These AI voices could become highly persuasive, especially if corporations use them for targeted advertising.

Rapid advances in multi-modal large language models (LLMs) have made these context-aware assistants possible. These models can process not just text but also images, audio, and video, allowing them to perceive the world much as we do. The first mainstream multi-modal model, GPT-4, was launched by OpenAI in March 2023, followed by Google's Gemini. Another notable development is Meta's AnyMAL, which also takes in motion cues, accounting for physical movement.

Meta is leading the charge by integrating these advanced AI models into consumer products such as its latest Ray-Ban smart glasses, which gained AI features in December and can provide recommendations and reminders based on what you see and hear. Another company, Humane, has developed a wearable pin with cameras and microphones. Both form factors have advantages, but glasses may prove more effective because they can eventually overlay visual elements directly into your line of sight.

Regardless of whether these devices use glasses, earbuds, or pins, they are set to become widespread within the next few years. They will offer features like real-time translation and historical context, and most notably, they will assist in social interactions. Imagine being reminded of a coworker’s name, getting conversation tips, or being alerted if someone you’re talking to seems annoyed or bored—all based on subtle cues detected by AI.

As these whispering AI assistants become more prevalent, they will make their users appear more charming, intelligent, and socially adept. Keeping up with the technology will become essential to avoid falling at a disadvantage, socially and professionally.

However, this trend also brings significant risks, especially in terms of manipulation by corporate or government entities. They could push their agendas using AI assistants to deliver highly persuasive, personalized content. Such risks need to be addressed, but so far, policymakers have only scratched the surface.

With the rapid development of these technologies, where companies compete to offer the most powerful AI guidance, there’s a looming danger of a digital divide. Those who can’t afford these tools might be pressured into accepting intrusive ads just to get access to basic AI features.

We must consider if this is truly the future we want. The idea of corporations placing voices in our heads to influence us is troubling. We urgently need regulations to manage AI systems that provide real-time, personalized guidance. While recent efforts by the White House and the EU have yet to fully address these issues, time is running out.

As we head into 2024, it’s crucial for policymakers to focus on the unique dangers of AI-powered conversational influence. With thoughtful regulation, we can enjoy the benefits of AI without veering towards a dystopian future. Now is the time to act.

Louis Rosenberg, a pioneering researcher in AI and augmented reality, founded Immersion Corporation and Unanimous AI and developed the first mixed reality system at the Air Force Research Laboratory. His new book, “Our Next Reality,” is available for preorder from Hachette.
