I had some small hopes that generative AI would be a flash in the pan, like Google Glass and Facebook's smart glasses. It still might be. But Microsoft and Google have now embedded it into their search, and so it has become inescapable from an information-searching perspective. It was interesting to me, then, to see a use case based on research that didn't assume AI was a displacement tool for information needs.
The thing about the research is that it finally made generative AI make sense to me. I understand that it can write more novels than I can. It can create more readable search engine results pages (SERPs) than Google or Bing, or any other search tool, including library catalogs. But it has remained less useful for me as a searcher than if I just do it myself. If Bing Chat AI and Google Bard are just using the same search indexes I am, then my experience as a researcher is a variable in the outcome.
The research looked at over 5,000 customer service representatives and their use of an AI tool to help them provide more efficient answers. I haven't read the pay-to-read research paper, but the sound bites are promising.
One thing that I do regularly is watch similar-but-different information contexts for potential impacts and trends on libraries. Media and newspapers are one, because they’re obviously creating information products to reach people. Call center and customer service teams are another, because they frequently work in a role similar to our reference teams: conducting intake interviews, directing people to information, and so on.
It is not much of a leap to look at a modern law library and see the similarities. We’re experiencing challenges hiring new law librarians, for a number of reasons. But once they’re in the door, we need to get them up and running to be successful. A reference librarian who has been at the desk for 20 years (hopefully not all on one shift) probably won’t need a helper tool. But a new law librarian might.
How Do We Accelerate the Learning?
The goal of the AI tool in the research sounds like it is meant to accelerate the learning of new, less experienced employees. In money terms, make them more effective more quickly. Law libraries are the same even if we don't have a financial function. If you look at your annual statistics, your reference question counts are largely determined by how many reference librarians you can throw at the questions coming in. The more librarians or librarian-like resources you can bring to bear, the higher your questions-answered data point can be.
A new law librarian may cause that data to drop because not only are they not necessarily able to answer as many questions on their own, the more seasoned librarians may answer fewer while they divert time to assist their new colleague. This is a good thing, damn the data. People need to learn, people need to mentor, and colleagues need to learn to work together. A director’s role will be to explain that to a governance or funding board.
At the same time, your reference team has implicitly or explicitly created a data bank of reference questions that have been asked and answered. I've seen these stored in commercial or home-grown reference question databases for recall, or even as email templates that can be cut and pasted to send to a researcher. Sometimes they're just in a seasoned reference librarian's head because, frankly, it takes too many resources to adequately save and curate knowledge in the typical law library service environment.
We’ve leaned on the CALI exercises and our law library became a CALI member solely for that purpose. If they’re good enough for law students, they’re good enough for law library staff. The added benefit is that it gives the new employee some time to learn and answer questions without a patron (or supervisor or colleague) in play. They can rework a set of exercises until they gain some level of confidence or proficiency that they can bring to the reference desk.
If there were an AI application that might help in a law library, then, it could be a tool that helps accelerate that learning curve for the new law librarian. They could interview the researcher while simultaneously blending their own knowledge with the AI access to that reference question and answer data.
I have no idea if that’s workable. It’s not a huge audience—are there even 5,000 law librarians in North America?—and it would not be needed for very long. While there are large language models (when are we going to rename the LLM degree?), they surely haven’t used reference Q&A data and so who knows how well they could be trained.
If it were to be successful, it would probably need to lean on many (perhaps all) law library reference databases to create a singular resource that could be used while it was needed. I don't see how there's enough need within the legal vertical for it to be viable.
Because, at the end of the day, it’s a tool for increasing expertise. Even if you could create it, and you could expose it to people who are not answering legal research questions, would it have the same benefit to them? How would a new associate or a self-represented patron engage with this sort of tool? How many of our reference Q&A elements are based on our collection or our perspective or a law that has changed?
On the other hand, could a tool like that be pushed down to library schools teaching legal sources and accelerate their proximity to a reference law librarian’s expertise before they hit the job market? Is the benefit of a tool like this for people already doing the work or, like the CALI exercises, directed to people who are orienting themselves to a bigger world?
Everything In Its Place
I clearly have no idea. I am not at all confident that what we are watching unfold—unravel—in the field of AI is at all what we will eventually see settle out. I was hugely hopeful for AR-enhanced glasses, as someone who wears glasses. But not if I can only wear Google-designed hardware (I have opinions on frames) and it only shows me enhanced shopping opportunities. Sign me up when we have a proper HUD.
A lot of what we seem to have access to is better web search. I think the benefit of that will mostly go to people who do not reach a law library. But if they cannot get to a law library or do not have access to one, it's not at all a bad place to be in. I think we can perhaps improve our content placement in that case, so that law library resources are exposed even when the researcher stops at AI-assisted web search.
I don’t offer this as a suggestion that I think our jobs are or should be sacred. But there is nothing yet to suggest that the quality of nuance in legal issues can be answered with publicly-available AI tools.
Ironically, I wonder if the regulator of lawyers will suffocate the creation of AI tools for legal professionals. At the end of the day, as firms like Allen & Overy have found, someone has to look at whatever it is the AI generates. Lawyer regulators require lawyers to sign off and stand behind their filings and drafted documents.
AI search seems to be a better option for people who need some help aggregating search results into an actual answer. Unfortunately, there's nothing to suggest that an AI-assisted search is going to result in better results being used. Bing seems to use just its own search results, the same as if you'd run a normal search. Who knows what Google Bard is doing. I asked it a question and it gave an answer with no citations to the court rules it was quoting.
At the same time, we direct people to primary sources to try to get them off the generic web searches. Hopefully, they will get to a source that has all of the hallmarks necessary for legal information: accuracy, currency, authenticity, and so on. A government web site may have terrible search or navigation, but it still may be better than the open web search index. The reason we pay a commercial legal publisher is because, in theory, they can bring both high-quality search and content together. Perhaps they will eventually bring an AI-assisted search to further close the gap with the reference librarian.
But we still need to deal with the unknown unknowns: the thing the new associate or the self-represented researcher or the early-career law librarian doesn't know to ask. And if they don't know to ask it, how will the AI know to answer it?
The use of an AI tool to supplement early knowledge building for law librarians touches on another point that seems important. We may accept, as law firms seem to have, that AI will reach an expertise plateau that it cannot rise above. The wisdom of the seasoned lawyer or librarian or doctor or teacher or scientist or ….
Funders may still want to drop AI into the mix, though, below that expertise ceiling. It could weaken the case for bringing in people to climb the expertise ladder. If we have AI, do we need associates or summer clerks or new law librarians? But if you forego those roles, then you never get seasoned lawyers or librarians, let alone all of the other things that people in those roles, with that expertise, can become.
If AI can’t replace all of the roles, then I don’t see how it replaces any that are on an expertise spectrum. So the idea that it can fit in beside an early career role makes a lot of sense to me. Like training wheels or water wings, it can help the person gain confidence and knowledge until they’re ready to fly solo.
The one thing I’m looking forward to is more actual research about how AI might work, if it will work, and then start to see realistic use cases. There is a lot of drama amid the gold rush and it feels very much too early in the hype cycle to get too worked up about what it might portend.