Notes by a co-founder
When I first heard of ‘fake news’, it was usually linked to something Donald Trump had said – either Trump accusing newspapers of misinformation, or vice versa. Then it gradually became a more prominent topic: people began talking about media literacy, respected public figures were expressing concern, and schools were addressing the issue in their curricula.
For a while, it still felt like a problem fairly removed from me and my world. I trusted the newspapers I read (probably because the people I trusted read them too). I was well educated, and I thought ‘fake news’ was mainly a problem for people who got their news from social media. That trust made me feel safe.
Then I moved to the UK. I started reading the ‘respectable’ British newspapers, and again felt safe and informed. That is, until I eventually got to articles about my country – a fairly large and relevant one. I had been born and raised there, explored all its corners by car and studied the country’s history, sociology and politics for many years at university. Compared to the general British audience, I was an expert. To my surprise, however, it became clear to me that even the most famous British newspapers sometimes spread misinformation about it. It came in different shapes and forms. Sometimes the information was simply false; sometimes it was presented through a limited and misleading perspective. Often there was no concrete information at all, but rather a value judgement based on widespread assumptions and not backed by facts. Sometimes the misinformation was even hidden in seemingly hand-picked images that reinforced inaccurate stereotypes.
I was puzzled. Could there really be some kind of conspiracy? After digging deeper, I realised this was actually a much broader and more complex problem. I did find fairly extensive public evidence connecting newspapers to politicians, businessmen and institutions all around the world, which made it clear that news stories often do have an agenda. But, mostly, in our digital world of free and unlimited information, I realised that newspapers need to publish fast, protect their audience and write shiny headlines to win more clicks, simply lacking the resources to thoroughly check everything they publish. Just like us, their journalists have unconscious biases (see more about our biased opinions) and political preferences (see how our opinions are formed), and are mostly passing on content from other sources without proper verification (content that might just feel right to them). More than that, they might not really be interested in fully verifying those stories when they corroborate their preconceptions (see why people lie).
Newspapers and journalists themselves acknowledge these issues as some of the main challenges to producing good-quality content. The acclaimed journalist Christina Nicholson, for example, says newspapers often need to resort to third-party contributors, and recalls how she used to write articles for a big US publication that were published “without any review.”
Still, most people are not willing to hear it. When I tried to warn people about the misinformation I had read, most of them quickly dismissed me – they thought I was being self-interested, or informed by unreliable sources and faulty statistics (chances are you too were a little suspicious when you read this). I can’t really blame them, because up until that moment I had always done the same. They, too, thought they lived in a safe and informed world built by their trusted media, and wanted to protect that sense of stability. They too thought ‘fake news’ was a problem reserved for flat-earthers on social media and the political opposition. Admitting that one piece of information published by a trusted source was simply wrong meant admitting a very hard truth: that we can’t really trust what we read, even when it is published by the most reliable newspapers (see how we sometimes want to protect the stories we’re told, especially when the storytelling benefits us). Of course, some newspapers are more committed to the truth than others, and there are ways of verifying whether the information we’re reading can be trusted (see how to avoid misinformation), but we can’t just blindly trust it or let ourselves be influenced by it – no matter where we read it.
That realisation entirely changed my perspective. I could easily spot misinformation about my home country only because I knew it so well – but what about all the other information I consumed on a daily basis? How could I really trust anything? How could I make sure I wasn’t just consuming stereotypes, distorted opinions and politically-motivated disinformation which would ultimately lead me to support the wrong people and the wrong causes?
There are three critical obstacles to fighting misinformation. The first is that no one can possibly have the time or energy to extensively investigate and double-check absolutely everything at all times. The second is that misinformation is really hard to prove, as most of it is not made of simple lies (see Is fake news really fake?), but rather of fragments of the truth selected to mislead us – and those fact-checking it are also people, with their own biases (as MIT’s Sinan Aral asks, “who checks the fact-checkers?”). The third is that, even if we could systematically prove misinformation, we should be extremely careful creating tools to actively punish it, as those very same tools can easily become instruments of censorship and control.
Any structural solution to the misinformation crisis must address those obstacles. It needs to be fast and work at scale to keep up with the daily bombardment of information. Fact-checkers are great at spotting plainly false information and are a key part of the solution, but they take a long time to investigate stories and can only cover a handful at a time. The solution also cannot be based on simply separating truth from lies (because most misleading information actually uses fragments of the truth to mislead us), but must instead assess context and causality to identify biases. Perhaps hardest of all, it should ideally not be carried out by one person or a specific group of people, as they will always have some degree of bias, but by a combination of objective rules applied agnostically and a broad collective of readers. Finally, it should offer a form of accountability that cannot be used to silence – or, better yet, a way to reward those striving to produce good-quality content.
Beehive News was designed with that solution in mind. It assesses thousands of articles daily, published by diverse newspapers from all sides of the political spectrum, spread across the world and grouped together by topic so readers can easily choose the most reliable ones to read. Ratings are produced by unbiased algorithms that separate fact-based, contextual articles from emotionally charged news, complemented by a large and smartly organised collective of people: all readers contribute to the ratings, but those who read more, and a more balanced mix of news, carry more weight (just like in a hive, each does their part, pollinating the world with better content). These ratings are not binary – articles are not labelled true or false, biased or unbiased – but sit on a scale of trustworthiness: the likelihood that an article can be trusted. In doing so, Beehive also creates a mechanism of accountability that directs resources to those producing good-quality content, and thus sponsors good-quality journalism and fosters financial sustainability in the industry once again.
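To make the idea concrete, here is a minimal sketch of how such a weighted collective rating could work. This is purely illustrative – the class names, the entropy-based diversity measure, the logarithmic volume weighting and the 50/50 blend are all assumptions of this sketch, not Beehive’s actual algorithm:

```python
# Illustrative sketch only: names and weighting rules are hypothetical,
# not Beehive's real implementation.
from dataclasses import dataclass
from math import log2

@dataclass
class Reader:
    articles_read: int          # total articles this reader has read
    outlet_shares: list[float]  # fraction of reading per outlet (sums to 1)
    rating: float               # this reader's 0-1 trust rating of the article

def reader_weight(r: Reader) -> float:
    """Weight grows with how much a reader reads and how balanced
    their mix of outlets is (normalised Shannon entropy)."""
    if len(r.outlet_shares) < 2:
        diversity = 0.0  # reading a single outlet earns no diversity bonus
    else:
        entropy = -sum(p * log2(p) for p in r.outlet_shares if p > 0)
        diversity = entropy / log2(len(r.outlet_shares))  # 0 (one-sided) .. 1 (balanced)
    volume = log2(1 + r.articles_read)  # diminishing returns on sheer volume
    return volume * (0.5 + 0.5 * diversity)

def trust_score(algorithmic_score: float, readers: list[Reader]) -> float:
    """Blend an algorithmic 0-1 score with the weighted mean of reader
    ratings, yielding a trustworthiness scale rather than a true/false label."""
    if not readers:
        return algorithmic_score
    total = sum(reader_weight(r) for r in readers)
    crowd = sum(reader_weight(r) * r.rating for r in readers) / total
    return 0.5 * algorithmic_score + 0.5 * crowd

# Example: a heavy, balanced reader outweighs a light, one-sided one.
readers = [
    Reader(articles_read=200, outlet_shares=[0.5, 0.5], rating=0.9),
    Reader(articles_read=5, outlet_shares=[1.0], rating=0.1),
]
score = trust_score(0.8, readers)
```

The key design choice the sketch tries to capture is that no single voice decides: the algorithmic score and the reader collective each bound the other, and a reader's influence depends on demonstrated breadth, not identity.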
Beehive is a fully independent, purpose-driven organisation, not in any way connected to any newspaper, corporation or political party. It was conceived, developed and tested by strategy advisors, journalists specialised in misinformation, psychologists and a range of other professionals who came together over the years to study the signs of misinformation and ways to overcome this immense contemporary problem – many working entirely as volunteers.
Of course, even Beehive will come under scrutiny. Not only is this fine; it is an inherent part of the democratic process. While we won’t disclose the rules behind our algorithms, to prevent bad actors from gaming the system, we will always be entirely open about the objective criteria within them. After all, making good news is not a secret: objective, fact-based, non-emotionally-charged articles that provide full context for their numbers and facts, and sufficiently demonstrate causality, will always score higher.
Join the hive today using the buttons below and become an active contributor to a world where we can trust information.