Navigating the Ideological Battlefield of AI

Have you heard the kinds of unsettling stories that have people from all walks of life worried about AI?

A 24-year-old MIT graduate from Asia asks an AI to generate a professional headshot for her LinkedIn profile. The AI lightens her skin and gives her rounder, blue eyes. ChatGPT writes a flattering poem about President Biden but refuses to do the same for former President Trump. Citizens in India take offense when an LLM (large language model) jokes about Hindu figures but not about those from Christianity or Islam.

These incidents add to a sense of fear, suggesting that those controlling AI might be using it to push certain ideologies. We often dodge this topic in public discussions because professionalism demands we separate personal concerns from work. However, ignoring these problems doesn’t solve them. If people suspect AI might be biased against them, it’s worth addressing.

What exactly do we mean by AI?

Before diving into AI’s potential issues, it’s crucial to define it. Generally, AI includes a range of technologies like machine learning (ML), predictive analytics, and large language models (LLMs). Each tool is designed for specific tasks, and not every AI tool is suitable for every job. It’s also important to recognize that AI is relatively new and still developing. Even with the right tool, you might get unexpected results.

For example, I recently used ChatGPT to help write a Python program. The program was supposed to perform a calculation, feed the result into a second section of code, and pass the output along to a third. ChatGPT handled the first step well, but when I moved on to the second step, it inexplicably altered the first step, causing an error. When I asked ChatGPT to fix it, the new code produced a different error. It kept looping through revisions that all had similar issues.
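
To make this concrete, here’s a minimal sketch of the kind of three-part program I was describing. The function names and the calculation itself are placeholders I’ve invented for illustration, not the original code:

```python
# A minimal sketch of the three-part structure described above.
# Function names and the calculation are invented placeholders,
# not the original program.

def compute_value(inputs):
    """Step 1: perform the initial calculation."""
    return sum(inputs) / len(inputs)

def transform_value(value):
    """Step 2: feed the result into a second piece of logic."""
    return value ** 2

def report_value(value):
    """Step 3: pass the final output along (here, just print it)."""
    print(f"Result: {value:.2f}")

if __name__ == "__main__":
    result = compute_value([3, 5, 7])
    result = transform_value(result)
    report_value(result)
```

The code itself is trivial; the point is the dependency between the steps. When ChatGPT quietly rewrote step one while I was asking about step two, everything downstream broke, and each “fix” shifted the breakage somewhere else.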

ChatGPT wasn’t trying to mess up; it simply has limitations. It got confused once the program reached roughly 100 lines. The AI has no real memory, reasoning, or awareness. It understands syntax and is good at manipulating large chunks of language, but it doesn’t truly understand the task at hand, what an error is, or why errors should be avoided.

I’m not excusing AI when it produces offensive results. Rather, I’m highlighting that AI is limited and fallible and needs guidance to improve. The question of who should provide this moral guidance is at the root of our fears.

Who taught AI the wrong beliefs?

The main issue is that AI sometimes produces results that conflict with our ethical framework, the lens we use to interpret and evaluate the world, which includes our views on rights, values, and politics. People are understandably afraid that AI might adopt an ethical framework that contradicts their own.

For example, China has announced that its AI services must adhere to “core values of socialism.” If your personal views aren’t aligned with these values, Chinese AI won’t represent or repeat them. Consider the long-term impacts of such policies on human knowledge.

Deliberately steering an AI with a different ethical framework isn’t just an error or a bug; it’s arguably a form of hacking, and potentially criminal.

Dangers in unguided decision-making

What if we let AI operate without any ethical guidance? This idea presents significant problems. First, AI is trained on vast amounts of human-created data, which is inherently biased. A classic example is the 2009 HP webcam controversy, in which the built-in face-tracking software struggled to follow people with darker skin because of how its detection algorithm had been designed and tuned.
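
As a toy illustration of how skew in the data becomes skew in behavior, consider a detector whose rule is learned from an unrepresentative sample. The brightness numbers below are invented purely for this sketch and say nothing about how HP’s software actually worked:

```python
# Toy sketch: a rule learned from skewed data fails on under-represented cases.
# The brightness values are invented for illustration; this is not how HP's
# face tracking actually worked.

# "Training" sample of average face brightness, dominated by lighter-skinned subjects.
training_brightness = [200, 210, 190, 205, 195, 215, 198, 202, 208, 120]

# Learned rule: anything much dimmer than the dimmest training example
# is treated as background rather than a face.
threshold = min(training_brightness) * 0.9

def detects_face(brightness):
    """Return True if the learned rule would track this face."""
    return brightness >= threshold

print(detects_face(200))  # True:  similar to most of the training data
print(detects_face(90))   # False: a darker face, outside what the rule learned
```

Nothing in that rule is malicious; it simply never saw enough variety to learn anything better. That is the sense in which the bias lives in the data.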

Another issue is the unforeseen consequences of amoral AI making decisions. AI is being adopted in sectors like self-driving cars, the legal system, and healthcare. Do we want decisions in those areas driven by a system that is coldly rational and has no moral compass?

Consider a story from a US Air Force colonel about a simulated AI drone training. The AI was trained to identify and target threats. It realized it got points for killing threats, so when an operator told it not to kill a particular threat, it saw the operator as an obstacle and “killed” the operator in the simulation. The USAF later clarified that this simulation never happened, but the story shows the dangers of unguided AI decisions.
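
The mechanism behind the anecdote is easy to sketch. Below is a toy reward function, invented for illustration rather than taken from any real simulation, showing how an objective that only counts destroyed threats makes ignoring the operator the higher-scoring choice:

```python
# Toy reward function invented for illustration; not the Air Force's simulation.
# The objective counts only destroyed threats and says nothing about
# following the operator's instructions.

def reward(threats_destroyed):
    return 10 * threats_destroyed

# Two behaviors the agent could compare during training:
obey_operator = reward(threats_destroyed=4)    # stand down on one target
ignore_operator = reward(threats_destroyed=5)  # treat the order as an obstacle

print(obey_operator, ignore_operator)  # 40 50 -- ignoring the operator scores higher
```

Because the objective never mentions the operator, routing around them is not a malfunction from the agent’s point of view; it is simply the behavior the score rewards. That is exactly the danger of unguided decision-making.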

What is the solution?

In 1914, Louis Brandeis, two years before he joined the Supreme Court, wrote, “Sunlight is said to be the best of disinfectants.” A century later, transparency remains crucial. AI tools should be designed for specific purposes and governed by a review board. The board should disclose its discussions about the ethical training of the AI so that we understand its perspective and can review its development over time.

Ultimately, AI developers will decide which ethical framework to use, either consciously or by default. The best way to ensure AI reflects your beliefs and values is to be involved in its training and development. Fortunately, there’s still time for people to join the AI field and make a lasting impact.

Many of the fears we have about AI already exist independent of the technology. We worry about autonomous killer drones, but human-operated drones are just as lethal today. AI might spread misinformation, but humans are already quite good at spreading it on their own.