The Overlooked Aspect in the AI Safety Dialogue

In light of recent developments at OpenAI, the discussion on AI has shifted towards whether we should accelerate or decelerate its development and how to align AI tools with human needs. The conversation about AI safety has quickly turned into a futuristic and philosophical debate: Should we aim for artificial general intelligence (AGI) that can perform any task a human can? Is that even possible?

While this debate is crucial, it overlooks one of the main challenges of AI: it’s incredibly expensive.

AI Needs Talent, Data, and Scalability

The internet revolution made software accessible to everyone, lowering barriers with evolving tools, new programming languages, and the cloud. AI's recent advancements, however, have largely come from scaling up, which requires ever more computing power. We haven't hit a plateau yet, which is why tech giants are investing billions in acquiring more GPUs and optimizing their computing infrastructure.

To build intelligent systems, we need talent, data, and scalable computing. The demand for scalable computing is growing rapidly, making AI a game for the few who have access to these resources. Many countries, let alone individuals and smaller companies, can’t afford to participate meaningfully. The costs are not just in training these models but also in deploying them.

Democratizing AI

According to Coatue's recent research, the demand for GPUs is only beginning. The firm predicts that the shortage could even strain our power grid, and more GPUs mean higher server costs. Consider that today's AI systems are the least capable they will ever be: as they grow more powerful, they will also become more resource-intensive unless we find solutions.

Currently, only companies with significant financial resources can build advanced AI models. To promote AI safety, we need to democratize it. This way, we can implement appropriate safeguards and maximize AI’s positive impact.

Risks of Centralization

The high cost of AI development means companies often rely on a single model across their products. If that model fails or degrades, the impact is widespread. For example, if OpenAI were to lose its employees and could no longer maintain its stack, many companies that build on it would struggle.

Heavy reliance on probabilistic systems is also risky. Most of the software world is built around deterministic, definitive answers, but AI models are probabilistic and their behavior shifts as they are retrained and updated. That unpredictability can break the code built on top of these models and the results that customers rely on.
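As a small illustration of that point, consider downstream code written against one expected output format. The sketch below uses a stand-in sampler rather than any real model API; the prompt, formats, and parsing logic are hypothetical.

```python
import random

def fake_model(prompt: str, temperature: float) -> str:
    # Stand-in for a probabilistic model: at temperature 0 it always returns
    # the same phrasing; otherwise it samples one of several plausible answers.
    answers = ["amount: 42.00", "The total comes to 42 dollars", "42"]
    return answers[0] if temperature == 0.0 else random.choice(answers)

def parse_amount(reply: str) -> float:
    # Downstream code that assumes the "amount: <number>" format.
    return float(reply.split("amount: ")[1])

# Deterministic setting: the integration works every time.
print(parse_amount(fake_model("What is the total?", temperature=0.0)))

# Sampled setting: the same prompt sometimes breaks the parser.
for _ in range(5):
    reply = fake_model("What is the total?", temperature=0.8)
    try:
        print(parse_amount(reply))
    except (IndexError, ValueError):
        print(f"downstream parse failed on: {reply!r}")
```

The same failure mode appears when a provider updates a model and its phrasing or formatting shifts, which is part of the unpredictability described above.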

Centralization also poses safety issues. Companies act in their own best interest, and if a safety or risk concern arises with a model you depend on, you have little control over fixing it and few alternatives to switch to.

If AI remains costly and limited in ownership, it will widen the gap between those who can benefit from this technology and those who cannot, exacerbating existing inequalities. A world where some have access to superintelligence and others do not would create a significant imbalance.

One way to enhance AI's benefits and safety is to reduce the costs of large-scale deployments. We need to diversify AI investments and broaden access to the computing resources and talent required to train and deploy new models. Data ownership will also be crucial: the more unique, high-quality data that is available, the more useful the resulting models will be.

Making AI More Accessible

While open-source models still show performance gaps, their use is expected to grow, especially if the White House ensures open source remains truly open. In many cases, models can be optimized for specific applications. The last mile of AI development will involve companies building routing logic, evaluations, and orchestration layers on top of different models, specializing them for different sectors.

Open-source models allow for a multi-model approach and give users more control. Despite current performance gaps, we may see a future where junior models handle less complex tasks at scale, while larger, super-intelligent models tackle more complex problems. You don’t need a trillion-parameter model to handle a customer service request.
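To make that concrete, here is a minimal sketch of the kind of routing layer described above: routine requests go to a small, cheap model, while harder ones escalate to a larger one. The model names, pricing figures, and complexity heuristic are placeholder assumptions for illustration, not any particular vendor's API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelEndpoint:
    name: str
    cost_per_1k_tokens: float  # hypothetical pricing, used only for context here
    generate: Callable[[str], str]

def small_model(prompt: str) -> str:
    # Stand-in for a cheap "junior" model.
    return f"[junior answer to: {prompt[:40]}]"

def large_model(prompt: str) -> str:
    # Stand-in for an expensive frontier model.
    return f"[senior answer to: {prompt[:40]}]"

JUNIOR = ModelEndpoint("junior-7b", 0.0002, small_model)
SENIOR = ModelEndpoint("senior-frontier", 0.01, large_model)

def estimate_complexity(prompt: str) -> float:
    # Placeholder heuristic: prompt length plus a few "hard task" keywords.
    score = min(len(prompt) / 500, 1.0)
    if any(k in prompt.lower() for k in ("prove", "architecture", "multi-step")):
        score += 0.5
    return score

def route(prompt: str, threshold: float = 0.6) -> str:
    # Simple requests stay on the junior model; complex ones escalate.
    endpoint = SENIOR if estimate_complexity(prompt) >= threshold else JUNIOR
    print(f"routing to {endpoint.name}")
    return endpoint.generate(prompt)

# A routine customer-service request stays on the cheap model;
# a complex analytical request escalates to the larger one.
print(route("Where is my order #1234?"))
print(route("Prove that this multi-step retrieval architecture preserves consistency."))
```

In practice, the complexity estimate would more likely be a trained classifier or an evaluation model than a keyword heuristic, and the router would log cost and quality metrics to feed back into the evaluations mentioned above.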

We've seen AI demonstrations, funding rounds, collaborations, and releases. Now we need to bring AI to production at scale, sustainably and reliably. Emerging companies are working on reducing inference costs through specialized hardware, software, and model distillation, and the industry should prioritize investment in these areas to make a significant impact.
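Model distillation, one of the cost levers mentioned above, trains a small student model to imitate a larger teacher so that inference can run on the cheaper model. The sketch below shows the standard soft-target recipe with toy PyTorch classifiers; the layer sizes, temperature, and loss weighting are illustrative assumptions, not a production configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical teacher (large) and student (small) classifiers.
teacher = nn.Sequential(nn.Linear(128, 1024), nn.ReLU(), nn.Linear(1024, 10))
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's output distribution

def distillation_step(x, labels):
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)

    # Soft-target loss: match the teacher's softened output distribution.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Hard-target loss: still fit the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    loss = 0.5 * soft_loss + 0.5 * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: random features and labels stand in for real training data.
x = torch.randn(32, 128)
labels = torch.randint(0, 10, (32,))
print(distillation_step(x, labels))
```

The same idea extends to language models, where the student learns from the teacher's output distributions or generations rather than from toy feature vectors.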

If we can make AI more cost-effective, we can bring more players into the space, improving the reliability and safety of these tools. This will help achieve the goal of delivering value to as many people as possible.

Naré Vardanyan is the CEO and co-founder of Ntropy.
