
At the Ohio Regional Association of Law Libraries (ORALL) Annual Meeting, as I presented on the duty of technology competence in the algorithmic society, an astute law librarian asked (paraphrasing), "How does fake news play into this?" That question set off a flurry of brain activity as I considered how Google ranks results for relevancy, the rise of fake news, and the ability of users to spot fake news sources, particularly in legal research.
As I was presenting to a group of lawyers at a CLE this week, I polled them on the electronic resource they primarily use for legal research. The overwhelming response was Google. So how does Google decide which results are relevant? Three factors stand out:
- The frequency and location of keywords within the webpage: if the keyword appears only once within the body of the page, the page receives a low score for that keyword.
- How long the webpage has existed: people create new webpages every day, and Google places more value on pages with an established history.
- The number of other webpages that link to the page in question: Google looks at how many webpages link to a particular site to determine its relevance.
Out of these three factors, the third is the most important.
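To make the point concrete, here is a minimal sketch of a relevancy scorer built on just those three factors. Everything in it is invented for illustration (the weights, the caps, the `Page` class); Google's actual ranking is proprietary and uses many more signals. What matters is what the sketch does *not* contain: any check on whether the page is true.

```python
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    body: str            # full text of the page
    age_days: int        # how long the page has existed
    inbound_links: int   # count of other pages linking here

def relevancy_score(page: Page, keyword: str) -> float:
    """Toy score combining the three factors above (weights invented)."""
    keyword_hits = page.body.lower().count(keyword.lower())
    # A page mentioning the keyword only once scores low on this factor.
    frequency_score = min(keyword_hits / 5.0, 1.0)
    # Older pages get more credit, capped at one year.
    age_score = min(page.age_days / 365.0, 1.0)
    # Inbound links get the biggest weight, mirroring the third
    # (most important) factor.
    link_score = min(page.inbound_links / 100.0, 1.0)
    return 0.2 * frequency_score + 0.2 * age_score + 0.6 * link_score

pages = [
    Page("https://example.com/real", "habeas corpus " * 10, age_days=900, inbound_links=40),
    Page("https://example.com/fake", "habeas corpus " * 10, age_days=30, inbound_links=400),
]
# Nothing in the score measures truthfulness: the heavily linked
# "fake" page outranks the older, accurate one.
for p in sorted(pages, key=lambda p: relevancy_score(p, "habeas corpus"), reverse=True):
    print(p.url, round(relevancy_score(p, "habeas corpus"), 2))
```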
With the rise of fake news sources, and given what we know about these relevancy factors, nothing prevents fake news from making its way into the top results for a search query.
Couple this with the shoddy research habits of the “Google Generation” (those born 1993 and after), and you have a recipe for disaster.
These ingrained research habits generally amount to letting algorithms do the heavy lifting of deciding what is relevant. Through hasty searching and cursory vetting of results, the user hands the algorithm a significant role in selecting the content that will advance the law, even if that information is "fake."
The rise of “fake news” and the proliferation of doctored narratives that are spread by humans and bots online are challenging publishers and platforms. Those trying to stop the spread of false information are working to design technical and human systems that can weed it out and minimize the ways in which bots and other schemes spread lies and misinformation.
The question: In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially destabilizing ideas?

Respondents were then asked to choose one of the following answer options:
- The information environment will improve – In the next 10 years, on balance, the information environment will be IMPROVED by changes that reduce the spread of lies and other misinformation online.
- The information environment will NOT improve – In the next 10 years, on balance, the information environment will NOT BE improved by changes designed to reduce the spread of lies and other misinformation online.
Some 1,116 responded to this nonscientific canvassing: 51% chose the option that the information environment will not improve, and 49% said the information environment will improve.
The article goes into much more detail about the ability of algorithms to detect fake news and the limitations of natural language processing. It’s a great read.
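Those limitations are easy to see with a toy example. Below is a deliberately naive bag-of-words classifier, a minimal sketch using scikit-learn with invented training snippets (this is not the article's method, just an illustration of the general approach): because the model only counts words, a false claim written in a sober, legal-sounding register can sail right past it.

```python
# Naive bag-of-words "fake news" classifier; training examples invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = [
    "court rules statute unconstitutional in published opinion",  # real
    "justices issue unanimous decision citing precedent",         # real
    "SHOCKING secret ruling they do not want you to see",         # fake
    "unbelievable judge scandal exposed click here",              # fake
]
labels = ["real", "real", "fake", "fake"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_texts)
model = MultinomialNB().fit(X, labels)

# A fabricated claim dressed in sober vocabulary borrows the "real"
# word distribution, so word counts alone cannot flag it.
test = "court issues unanimous opinion abolishing all state statutes"
print(model.predict(vectorizer.transform([test])))  # likely ['real']
```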
Ultimately, we cannot rely on algorithms to detect fake news for us. As we train competent attorneys, we must continue to train them to be critical, evaluative users. This will be an increasingly uphill battle as the Google Generation and beyond enters law practice.