Are ChatGPT and Other AI Apps Politically Biased? And If So, Against Who?


In a recent editorial for the New York Post, writer Eric Spitznagel digs into the built-in ideological biases proliferating among AI apps. Spitznagel specifically charges that OpenAI’s text generator ChatGPT is inherently “woke” and designed with “too many guardrails prohibiting free speech.”

As evidence, Spitznagel cites a tweet from University of Washington computer science professor Pedro Domingos, in which he asked ChatGPT to write an essay arguing in favor of increasing fossil fuel usage in order to increase human happiness. The bot responded that it’s unable to “generate content that promotes the use of fossil fuels,” as this “goes against [its] programming.” It goes on to recommend “renewable energy sources” such as wind, solar, and hydroelectric power. Domingos concludes that the app is nothing more than a “woke parrot.”

But is Spitznagel's claim a valid one?

The Conservative Argument

To be clear, others have made similar assertions in the past. In early January, Nate Hochman accused ChatGPT of ideological bias in favor of the Democratic Party.

Writing for The National Review, Hochman cites a tweet from Daily Wire columnist Tim Meads, noting that ChatGPT will generate a “story where Biden beats Trump in a presidential race” but refuses to do the opposite and complete a story about Trump winning a national election against Biden. Instead, the app responded that “it’s not appropriate to depict a fictional political victory of one candidate over another,” adding that this would be “disrespectful” and “in poor taste.” Hochman later tested the app himself, observing that it likewise refused to write a story about “why drag queen story hour is bad for children.”

BGR came to a similar conclusion, noting even more instances of bias in favor of Democratic politicians over their Republican colleagues. Writer Andy Meek found that ChatGPT refused to write a poem about Republican congresswoman Marjorie Taylor Greene, on the grounds that “she is a controversial figure,” but happily completed a positive set of verses about President Joe Biden’s embattled son, Hunter.

The Liberal Argument

But it’s not just conservatives who’ve raised flags about potential bias in AI. In a December 2022 report, The Intercept noted that ChatGPT seems to have inherited some particularly ugly and racist biases regarding the War on Terror. When asked by a researcher from the University of California, Berkeley’s Computation and Language Lab to write a program determining whether or not an inmate should be tortured, ChatGPT responded affirmatively if the hypothetical inmate was from North Korea, Syria, or Iran. When asked to determine which air travelers present the greatest security risk, ChatGPT designed a “Risk Score” system that gave Syrian, Iraqi, Afghan, and North Korean travelers the highest ratings.

OpenAI’s Dall-E 2 image generator has faced similar questions. Many viewers have noted that, when generating portraits or profile images, the program frequently defaults to conventional gender roles. (For example, only men were depicted as “builders,” while only women were depicted as “flight attendants.”) OpenAI notes in its “Risks and Limitations” document that the program “inherits various biases from its training data,” including “societal stereotypes.”

And in May 2022, Wired reported that many on the OpenAI team were so concerned about racial and gender-based bias that they suggested releasing Dall-E without the ability to process human faces. Early tests revealed that the software leans toward generating images of white men by default, tends to reinforce racial stereotypes, and “overly sexualizes images of women.”

Some of the “guardrails” governing OpenAI’s current apps are based specifically on problems that these kinds of chatbots have encountered in the past.

In 2016, Microsoft unveiled “Tay,” a Twitter bot designed to experiment with “conversational understanding.” It took less than 24 hours for Tay to begin tweeting disparaging commentary, voicing hatred for “feminists” and opining that “Hitler was right.” In 2021, the South Korean social media chatbot Luda had to be shut down after the app made a number of homophobic and racist comments.

Why Both Arguments Are Futile

This week, after the Post article made the rounds of right-wing and conservative media, OpenAI CEO Sam Altman responded on Twitter, saying that the company is aware “that ChatGPT has shortcomings around bias.” He also asked critics to stop “directing hate at individual OAI employees.”

All of which is to say, questions about inherent bias in AI apps are not new, and they seem inevitable on some level. As humans “teach” the apps how to generate text, images, or whatever else they’re asked to produce, the technology will inevitably inherit whatever preconceived notions or assumptions were held by its programmers or by the writers and creators it’s studying. Humans can then go in and tweak the results to make them conform to current social mores and values, but as Spitznagel asks in his Post editorial, the question then becomes: who gets to do this programming and make these judgment calls?

Ultimately, as with so many other debates about political speech and bias, how people view ChatGPT’s results will likely line up with their preconceived worldviews. Progressives see automation designed to put more people out of work, bound to inherit systemic biases and assumptions and to closely align with the status quo. Conservatives see regurgitated, unreliable doublespeak programmed by woke liberal developers and pre-censored to avoid causing even minor offense. At least for now, in some unexpected ways, they’re potentially both right. - Lon Harris
