The official student news site of Dougherty Valley High School.

The Wildcat Tribune

AI won’t destroy the world. But we should still be worried.

Tom Cowap
HAL 9000, from the movie “2001: A Space Odyssey”, is one of the many AI takeovers portrayed in popular media since the mid-1900s.

For decades, popular media has depicted artificial intelligence as a potential menace: movies like “2001: A Space Odyssey,” “The Terminator,” and “I, Robot” imagine dystopian futures in which machines gain sentience and attack humanity.

Today, AI is more prevalent than ever before. The development of popular AI models such as ChatGPT and Stable Diffusion has led to mainstream discussion about the capabilities of AI.

This has caused widespread concern that we’re headed towards an apocalyptic future in which robots overthrow humans and dominate the world. Most notably, billionaire tech entrepreneur Elon Musk has repeatedly warned of the dangers of AI, saying it “has the potential of civilization destruction.”

However, AI gaining sentience and attacking humanity isn’t a realistic outcome, especially given our current technological landscape. Consider ChatGPT: arguably one of the largest and most sophisticated AI models ever created, with hundreds of billions of parameters, trained on trillions of words scraped from the Internet. Yet its capabilities are limited to receiving some text and guessing which word should come next. That leaves no room for sentience, or for posing a threat to humanity.
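To see how mechanical “guess the next word” really is, here is a toy sketch: a hypothetical frequency model that simply counts which word tends to follow which in its training text. (This is an illustration of the task, not ChatGPT’s actual architecture, which uses a far larger neural network.)

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in the training text.
training_text = "the cat sat on the mat the cat ate the fish"

counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    counts[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in training."""
    if word not in counts:
        return None  # never seen this word; the model has nothing to say
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

However impressive its output, a model like this only ever does one thing: continue text with statistically likely words. Scaling the idea up makes the guesses better, not the machine sentient.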

The reality is, even our most advanced AI models are still nowhere close to the human-like intelligence depicted in popular media.

Another increasingly common concern in the public is the likelihood of AI taking over the workforce. Yes, technological innovation often leads to changes in jobs as industries replace human workers with machines or computers. But this isn’t necessarily a bad thing. For instance, the Industrial Revolution, where the use of machines first became widespread, led to more jobs being created and an overall improvement in living standards. It’s likely that the development of AI will do the same, creating employment opportunities that will benefit society. Right now, we already have prompt engineers, who design the text prompts being sent to ChatGPT in order to make it produce coherent results. As more AI technologies are developed, who knows what other jobs will be created in the future?

Even with these rapid changes, AI as it is used in industry still needs human supervision. Factories that rely heavily on machines, for example, still have humans monitoring them to make sure operations run smoothly. AI deployments will likely follow the same pattern, with human oversight in place to make sure nothing goes wrong.

However, this doesn’t mean that public concern about AI is completely unfounded. In fact, there are many reasons why we should be worried. Although AI taking over the world or the workforce is unlikely, there are still plenty of issues with the emerging technology.

One of the main issues with AI is a lack of interpretability. Since AI models are frequently composed of intricate algorithms with millions of variables and complex relationships between them, it can often be difficult to explain how an AI model arrived at a particular decision.

Having clear, human-readable explanations behind an AI’s decision-making is extremely important. For instance, if a medical AI that diagnoses patients’ diseases were ever deployed, understanding the underlying reasoning behind a diagnosis would help doctors verify the results and communicate with patients.

Another issue that closely ties in with the lack of interpretability is decision bias. Current AI models are good at picking up on patterns in the data they are given. However, they may latch onto patterns that do not align with our values, leading them to make morally gray decisions.

What does this mean? Take the use of AI in the criminal justice system, where it is used, among other tasks, to help determine the sentences of convicted criminals. The problem is that these AI systems are trained on historical crime data, and studies have shown that people of color receive longer sentences than white defendants. As a result, an AI trained on that historical data may give minorities harsher sentences.
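A hypothetical sketch makes the mechanism concrete. The numbers below are invented, and a real sentencing model would be far more complex, but the point holds: a model “trained” on biased historical outcomes will faithfully reproduce that bias, because the pattern is in its data.

```python
from statistics import mean

# Invented historical data: (group, sentence in months) for the same offense.
# Group B received systematically longer sentences in the past.
historical_cases = [
    ("A", 24), ("A", 26), ("A", 25),
    ("B", 36), ("B", 38), ("B", 37),
]

def fit(cases):
    """'Train' a naive model by averaging past sentences per group."""
    groups = {}
    for group, months in cases:
        groups.setdefault(group, []).append(months)
    return {group: mean(vals) for group, vals in groups.items()}

model = fit(historical_cases)
# The model recommends roughly 25 months for group A and 37 for group B,
# purely because the historical data did; nothing about the offense differs.
print(model)
```

Nothing in the code is malicious; the bias comes entirely from the training data, which is exactly why it is so easy to mistake the output for an objective judgment.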

It’s extremely important that the public is aware of the potential for bias in AI. Since the sentencing decision is made by a computer, many people may incorrectly assume that it must be objective, when it is in fact perpetuating the preexisting biases present in its training data. This doesn’t just apply to the criminal justice system, either: any application that uses AI will have to deal with the same problem of biased training data. It is critical that people understand the limitations of AI to avoid propagating historical bias into the future.

So although the doomsday scenarios of AI destroying humanity always get the most public attention, they aren’t very realistic. Instead, we should focus on addressing the valid concerns about the technology to ensure it’s used responsibly and ethically.

About the Contributor
Eugene Kwek
Eugene Kwek, Multimedia Manager
Eugene joined the Wildcat Tribune because he heard from his friends that it was fun. This year is his second year of journalism and his first writing for the Tribune. Eugene's journalism goal for this year is to write about technology, which he's really passionate about. In his free time, he likes to code interesting projects and explore the trails around San Ramon. An interesting fact about him is that he is a GeoGuessr addict. If Eugene could be any other person on the Tribune, he would be Leo because he is funny and cool.
