The Student News Site of Northern Michigan University

The North Wind


About us

The North Wind is an independent student publication serving the Northern Michigan University community. It is partially funded by the Student Activity Fee. The North Wind digital paper is published daily during the fall and winter semesters except on university holidays and during exam weeks. The North Wind Board of Directors is composed of representatives of the student body, faculty, administration and area media.

Left-wing values will define AI

Hollywood’s portrayals of the future never fail to amuse. We are beginning to reach an era in which past movies and shows are set in the very years we are now entering, and often the predictions are way off. For instance, the timeless classic “Back to the Future Part II” is set in 2015 and features flying cars and self-drying jackets. Unfortunately, those ideas have not quite come to fruition.

One Hollywood idea that has become reality, though, is artificial intelligence (AI). However, it’s not quite as depicted in Will Smith’s “I, Robot.” AI isn’t walking around on two legs; it’s embedded in the digital systems we use every day.

In creating AI, humans also created a competitor they cannot beat. Humans are rational beings, but our rationality is balanced between intelligence and emotion. With AI, we have, in a sense, extracted the former and often more desirable of the two, isolated it and quantified it. Now we are embedding it in our systems, where it is steadily replacing roles once filled by humans.

The internet and other digital systems are foundational to modern technology, and thus to modern life. Our phones, cars, social media platforms and even banking are all seeing the introduction of AI. Living without these things in 2019 is near impossible; digital technology is now the basis upon which our world runs. Business is more likely to be conducted in a PDF than on paper. Your work associates communicate over email, and your friends over social media. We function in the context of a digital world.


It’s important to understand that to create functional AI, systems must be encoded with more than calculations and cost-benefit analysis. They must be encoded with morals. Morality is the tool that makes consistent decisions possible. We know there is not always a right answer, and sometimes a choice comes down to a question of values. This is just as true for AI.

For example, Twitter has already moved to a system of behavior enforcement in which tweets are analyzed not by actual people but by computer systems (human employees are still tasked with reviewing appeals). As Twitter leans further on AI, these systems will take on more tweet and conversation analysis, acting as police for the platform and ensuring nobody breaks the rules. What happens, though, when the bot is forced to choose between protecting free speech and enforcing action against hate speech?

For example, let’s imagine a transgender person in a Twitter argument with someone who doesn’t view transgender identity as valid. If the latter calls the transgender person by something other than their preferred pronoun, should the bot suspend them for abuse, or is that free speech? Your answer will depend on your values. The bot, though, must make a decision, and so it must be encoded with some sort of morality.
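The claim that a moderation bot must carry encoded values can be made concrete. The following is a minimal illustrative sketch, not Twitter’s actual system: every name and policy flag here is hypothetical, but it shows how a single moral judgment, written down as configuration, determines which way the bot rules on the identical tweet.

```python
from dataclasses import dataclass

# Hypothetical policy knob: does the platform treat deliberate
# misgendering as abuse, or as protected speech? This one boolean
# is a moral judgment, encoded as configuration.
@dataclass
class ModerationPolicy:
    misgendering_is_abuse: bool

def review_tweet(tweet_flags: set, policy: ModerationPolicy) -> str:
    """Return a moderation action for a flagged tweet."""
    if "misgendering" in tweet_flags:
        # The bot cannot abstain: whichever branch it takes
        # enacts the value system it was shipped with.
        return "suspend" if policy.misgendering_is_abuse else "allow"
    return "allow"

# Two platforms with different encoded values reach opposite
# verdicts on the same tweet.
strict = ModerationPolicy(misgendering_is_abuse=True)
permissive = ModerationPolicy(misgendering_is_abuse=False)
flags = {"misgendering"}
print(review_tweet(flags, strict))      # suspend
print(review_tweet(flags, permissive))  # allow
```

Whoever sets that flag, not the user and not the bot, has decided the moral question in advance; that is the sense in which values are “embedded” in the system.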

In another example, let’s say two people run out in front of an autonomous car. The car can either hit them and kill them, or veer off and kill one person walking along the sidewalk. What does the car do? There isn’t necessarily a right answer, but AI must be able to choose.

This raises an issue because a huge swath of AI technology is coming out of one place: Silicon Valley. That is problematic when that one place overwhelmingly subscribes to one political ideology and value system. A 2017 study by political scientists at Stanford University concluded that those in the big-tech industry overwhelmingly support left-wing social and cultural ideas. That means those ideas are being embedded in the AI systems managing our digital world. In the Twitter example above, the bot would choose to suspend the person using the wrong pronoun. This is not a prediction; Twitter’s systems are already programmed to make that decision.

The argument here is not for one value system or another. Rather, it identifies the problem with allowing a select few individuals to embed their own ideals and values in our societal foundation. Hollywood’s predictions may often miss, but we can be certain the AI future will hold discord.
