Estimated Reading Time: 6 minutes
The first post in our series on bias covered gatekeeper bias, under which audiences can remain blissfully unaware of news events that the gatekeepers/publishers deem a poor fit for their particular agendas.
In this blog, we explore a second type of bias, known as editorial bias (sometimes called “statement” bias or media bias). With editorial bias, the editor or author of a news article consciously (or unconsciously) inserts their opinions into the “news” as facts, and in a circular bit of logic, their audiences accept those opinions as facts because, well, it’s the news and it’s from one of my news sources.
As a note to our readers—it is increasingly difficult to find any “news” from major (or even secondary) online publishers that is bias-free.
Separately, if you’re interested in seeing a well-diagrammed media bias breakdown chart, visit: https://www.adfontesmedia.com/
What is Editorial Bias?
Editorial bias is closely related to, and in many instances the direct cause of, the phenomenon of fake news—where the opinions (or worse) of editors and publishers are taken at face value by an audience. This may happen for several reasons—for example, the audience trusts the news source for factual reporting of other news, or the audience readily accepts the “news” as presented by the editor because it confirms their worldview—a sort of confirmation bias.
Gatekeeper bias (because many folks don’t even know the “story” exists) allows us to keep up the facade of normality when interacting at the water cooler, during holidays with family, and at sporting events and social outings. Most reasonable (and even unreasonable) people will at least want to do a modicum of research before responding to something they haven’t heard a single thing about.
With editorial bias, much of that bonhomie gets dropped—even a seemingly innocent statement or retelling of a “news” story is immediately met with a passionate and contrary response from someone who has heard the same news, but with a completely different presentation of the “facts” of the story. This has led to variations on the “You are entitled to your own opinion, but you are not entitled to your own facts” theme. In many cases, we are in fact arguing over the opinions of the editors, not the facts themselves.
Identifying Forms of Bias for Web Content Categorization
While our mission at zvelo is to be the market’s leading web categorization provider, we are also interested in identifying why and how we are becoming more polarized, and in finding ways to raise awareness of the issue. Other organizations and projects are working toward similar objectives, such as AllSides: Media Bias Ratings, which takes an interesting, crowd-sourced approach to identifying the political bias of various publishers. These efforts both illustrate the issue and help inform the AI/ML and training techniques being adopted by companies like zvelo.
Editorial bias matters greatly for companies like zvelo that are focused on highly accurate web content categorization. Ascertaining which statements are opinions and which are facts, and how the two are used and often mixed within other bits of web content, makes for a considerable challenge in training humans, training AI/ML algorithms, and training the humans who train AI/ML algorithms. It also makes it very difficult to build and maintain categorization, fake news, and bias detection systems that work effectively.
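To illustrate why this is hard, a crude first pass at flagging opinionated language can be sketched as a cue-phrase lexicon. This is a hypothetical toy, not zvelo’s method: the function names are invented, and the cue list is drawn from the loaded phrasing in the excerpts later in this post. A real system would rely on trained models rather than a hand-built word list.

```python
# Hypothetical sketch: flagging opinion-laden sentences with a cue-phrase
# lexicon. Real systems would use trained classifiers; this only shows why
# opinion vs. fact is a lexical-signal problem at its crudest level.

OPINION_CUES = {
    "no doubt", "thrilling", "harsh", "kneecap",
    "bad blood", "starkest", "disappointment", "lame-duck",
}

def opinion_score(sentence: str) -> int:
    """Count opinion-laden cue phrases present in a sentence."""
    text = sentence.lower()
    return sum(1 for cue in OPINION_CUES if cue in text)

def looks_like_opinion(sentence: str, threshold: int = 1) -> bool:
    """Flag a sentence as likely opinion if it contains enough cue phrases."""
    return opinion_score(sentence) >= threshold
```

The obvious weakness, and the point of the examples below, is that opinion is often woven into otherwise factual sentences, so sentence-level flags like this miss mixed cases entirely.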
Examples of Editorial Bias In The Media
At the risk of offending readers who have been the unknowing victims of editorial bias, we offer the following excerpts of “news” articles from the fall of 2018 (each article is linked after its excerpt). We have attempted to find examples from across the political spectrum of publishing that highlight editorial bias—which, as noted above, many people will accept as gospel and many others will immediately and passionately argue is not fact.
IMPORTANT: The following excerpts and commentary are not intended to persuade you or discredit/support the respective media outlets. They in no way represent the political beliefs of zvelo or our employees. They are provided solely to represent and showcase examples of editorial bias—AND to reinforce the importance of objective criticism for the purposes of overcoming the technical challenges associated with bias.
“Trump was not invited to attend McCain’s funeral, given bad blood between the two men, yet his presence was felt as contrasts were drawn with the current president’s harsh brand of politics.”
Source: The Associated Press (https://www.apnews.com/d27ee34e7c884d009aa175f36b507352)
Note that phrasing such as “bad blood between the two men” and the “current president’s harsh brand of politics” is the editor’s opinion—albeit an opinion that would be shared by many in the audience.
“EPA will take comment on its proposal for 60 days, and will no doubt be met with stiff opposition from environmental activists.”
Source: The Daily Caller (https://dailycaller.com/2018/12/06/trump-epa-coal-plants-ban-repeal/)
Once again, the assertion that the proposal “will no doubt be met with stiff opposition” is opinion inserted into what would otherwise be a ‘news’ article.
“A major scientific report issued by 13 federal agencies on Friday presents the starkest warnings to date of the consequences of climate change for the United States, predicting that if significant steps are not taken to rein in global warming, the damage will knock as much as 10 percent off the size of the American economy by century’s end.”
Source: The New York Times (https://www.nytimes.com/2018/11/23/climate/us-climate-report.html)
In this excerpt, the claim that the report “presents the starkest warnings to date” is the editor’s opinion, while the projected 10 percent hit to the American economy is unsubstantiated and could be considered fake news, as there is no reference to a 10% reduction in the American economy in the report.
“In January 2017, a secret dossier was leaked to the press. It had been compiled by a former British intelligence official and Russia expert, Christopher Steele, who had been paid to investigate Mr Trump’s ties to Russia… Fusion GPS, the Washington-based firm that was hired to commission the dossier, had previously been paid via a conservative website to dig up dirt on Mr Trump.”
Source: BBC News (https://www.bbc.com/news/world-us-canada-42493918)
In this example, the bias takes the form of an omission by the editor: the evidence has shown that the dossier was largely financed by the Clinton campaign and the DNC.
“President Donald Trump’s critics have spent the past 17 months anticipating what some expect will be among the most thrilling events of their lives: special counsel Robert Mueller’s final report on Russian 2016 election interference. They may be in for a disappointment… Perhaps most unsatisfying: Mueller’s findings may never even see the light of day.”
In this example, the editor injects their own emotions and feelings about the topic, projecting how “critics” might feel about an outcome that has not yet been decided.
“Wisconsin’s lame-duck, Republican-controlled state Legislature passed on Wednesday a host of measures designed to kneecap Gov.-elect Tony Evers, other Democrats elected to statewide offices and hurt the Democratic Party in general.”
Here, political leanings are made clear through word choice. The excerpt omits details about the “measures” themselves while using visceral, violent phrasing such as “kneecap” to accentuate the point.
When developing services that perform bias detection—and given the role bias plays in effective, accurate categorization—the specific content being examined is obviously critical. However, it is sometimes also necessary to take a broader look at the context of the content, including the historical biases of the editor and the publisher. These factors significantly complicate an already complex process, but they are increasingly necessary to build as complete a picture as possible and to classify the content accurately.
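One simple way to picture blending content-level and context-level signals is a weighted combination of a per-article bias score with historical priors for the editor and publisher. Everything below is an illustrative assumption—the function name, the weights, and the idea that each signal is already normalized to [0, 1] are hypothetical, not a description of zvelo’s actual pipeline.

```python
# Hypothetical sketch: combining an article's own bias score with
# historical priors for its editor and publisher. Weights are illustrative
# and would be tuned (or learned) in any real system.

def combined_bias_score(article_score: float,
                        editor_prior: float,
                        publisher_prior: float,
                        w_article: float = 0.6,
                        w_editor: float = 0.2,
                        w_publisher: float = 0.2) -> float:
    """Weighted average of content and context signals, each in [0, 1]."""
    total = w_article + w_editor + w_publisher
    return (w_article * article_score
            + w_editor * editor_prior
            + w_publisher * publisher_prior) / total
```

Keeping the article’s own score dominant reflects the point above: context helps, but the specific content being examined remains the primary evidence.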
Maintaining Objectivity in Web Categorization Despite Inherent Biases
Creating not just the AI/ML categorization and detection algorithms and services, but also the human-verified training and test corpora, is essential to performing these processes at a satisfactory level. It is particularly challenging to remove inherent human bias from the individuals creating the training and testing data, as those biases end up incorporated into the training of the AI/ML services.
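A standard way to measure how much annotator subjectivity leaks into labeled corpora is inter-annotator agreement, for example Cohen’s kappa, which corrects raw agreement for chance. The sketch below is a minimal, self-contained implementation of the textbook formula; the “fact”/“opinion” label scheme in the usage is a hypothetical example, not zvelo’s taxonomy.

```python
# Minimal Cohen's kappa: agreement between two annotators beyond what
# chance alone would produce. Low kappa on fact/opinion labels is a
# warning sign that annotator bias is entering the training corpus.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Return Cohen's kappa for two equal-length label sequences."""
    assert labels_a and len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items labeled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement: chance overlap given each annotator's label rates.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[k] * counts_b.get(k, 0) for k in counts_a) / (n * n)
    if expected == 1.0:  # both annotators used a single identical label
        return 1.0
    return (observed - expected) / (1 - expected)
```

In practice, a team might route items with low agreement back for adjudication rather than feeding disputed labels straight into model training.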
Any company making an effort to develop these types of services fully understands the dilemma of who will watch the watchers. In the examples above, sorting out what’s fact, what’s fiction, and what’s opinion or worse—and how to train a diverse group of human analysts to spot the differences—are areas where we continue to evolve our thinking and approaches.
This is all part of our efforts to deliver the market’s best web categorization services (and maybe, just maybe, help reduce the polarization of our society).