Last week Google’s artificial intelligence, DeepMind, beat humans at a handful of Atari games. That’s a big departure from the chess-champion computers of the past, since it’s a lot harder to “solve” a game like Pong or Space Invaders, and it’s a huge sign that machine learning is well on its way to becoming the next breakthrough technology—and hopefully the legal industry can keep up.
But it’s often said that while technology moves fast, law moves slowly. And artificial intelligence is already all around us, from spam filters to Cleverbot. Though the AI landscape could look completely different in a few years or even a few days, here are three angles that lawyers are already considering.
1. Man vs. machine
For the past few years, many lawyers have been exploring machine learning as a way to cut costs and reduce the hours spent on discovery. Known as “predictive coding,” the technique lets attorneys teach an algorithm what sorts of documents would be beneficial to a case, so that it can start from a hand-reviewed seed set and find a larger trove of relevant documents or information. Predictive coding can wade through and prioritize documents, replacing the manual review process—and the human power—typically performed by attorneys.
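At its core, predictive coding is supervised text classification: attorneys label a small seed set, a model learns from it, and the model then ranks the unreviewed pile by likely relevance. Here is a minimal, hand-rolled naive-Bayes sketch of that idea; it is not any vendor's actual implementation, and the document texts and labels are invented for illustration:

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase a document and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

class SeedSetClassifier:
    """Tiny naive-Bayes relevance scorer trained on a hand-reviewed seed set."""

    def __init__(self, seed_docs):
        # seed_docs: list of (text, is_relevant) pairs reviewed by attorneys
        self.counts = {True: Counter(), False: Counter()}
        self.doc_totals = Counter()
        for text, label in seed_docs:
            self.counts[label].update(tokenize(text))
            self.doc_totals[label] += 1
        self.vocab = set(self.counts[True]) | set(self.counts[False])

    def relevance(self, text):
        """Return P(relevant | document) under the naive-Bayes model."""
        scores = {}
        total_docs = sum(self.doc_totals.values())
        for label in (True, False):
            # log prior plus per-token log likelihoods (Laplace-smoothed)
            logp = math.log(self.doc_totals[label] / total_docs)
            token_total = sum(self.counts[label].values()) + len(self.vocab)
            for tok in tokenize(text):
                logp += math.log((self.counts[label][tok] + 1) / token_total)
            scores[label] = logp
        # convert the two log scores into a probability of relevance
        m = max(scores.values())
        exp = {k: math.exp(v - m) for k, v in scores.items()}
        return exp[True] / (exp[True] + exp[False])

# Hypothetical seed set: a few documents an attorney has already reviewed.
seed = [
    ("merger agreement signed by both parties", True),
    ("board approved the merger terms", True),
    ("office holiday party catering menu", False),
    ("weekly cafeteria lunch schedule", False),
]
clf = SeedSetClassifier(seed)

# Rank the unreviewed pile so humans look at likely-relevant documents first.
ranked = sorted(
    ["lunch menu for the party", "draft merger agreement terms"],
    key=clf.relevance,
    reverse=True,
)
```

Production e-discovery tools use far more sophisticated models and iterative review rounds, but the workflow above (seed set in, ranked documents out) is the basic shape of the process.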
It’s catching on quickly, especially after a New York judge approved the use of the process in 2012. Given that it can save users as much as 70 percent in costs, it’s a pretty good bargain. Some even think it’s too good a bargain, with one report from November of last year asserting that artificial intelligence will cause a structural collapse of law firms by 2030. As Dan Bindman explains for Legal Futures:
The report’s focus on the future of work contained the most disturbing findings for lawyers. Its main proposition is that AI is already close in 2014. “It is no longer unrealistic to consider that workplace robots and their AI processing systems could reach the point of general production by 2030… after long incubation and experimentation, technology can suddenly race ahead at astonishing speed.”
By this time, ‘bots’ could be doing “low-level knowledge economy work” and soon much more. “Eventually each bot would be able to do the work of a dozen low-level associates. They would not get tired. They would not seek advancement. They would not ask for pay rises. Process legal work would rapidly descend in cost.”
The human part of lawyering would shrink. “To sustain margins a law firm would have to show added value elsewhere, such as in high-level advisory work, effectively using the AI as a production tool that enabled them to retain the loyalty and major work of clients…
“Clients would instead greatly value the human input of the firm’s top partners, especially those that could empathise with the client’s needs and show real understanding and human insight into their problems.”
That estimate may be a bit extreme; Microsoft is currently hoping to advance that type of analytics, but given the start-up costs, teaching protocol, and tech savviness that predictive coding requires, it’s unclear when exactly it’ll reach the average lawyer. Still, the future’s bound to get here sometime. How will lawyers react when it does?
2. If a computer codes in a forest…
As one would guess, a major component of the AI field is for computers to learn on their own, thus writing and rewriting their own code to understand what’s happening. So if a computer codes in a forest and no one is there to hear it, who owns the copyright? It’s a question that Kenneth A. Grady believes we’re going to be asking ourselves much more in the future:
As the software does so, who is responsible for what the software does? We also have software interacting with other software, and doing so in ways that humans can’t follow. That is, we can’t reverse engineer what happened when something goes wrong. Who is responsible when something does go wrong? As we let computers make decisions that humans made, and as the computers can do so not by following programs humans wrote but by developing their own programs, what happens to the concept of causality? How do we handle situations where the computer software resides outside the country where the harm occurred? The list of questions is long and the questions are complicated, but we are just beginning to work through what to do.
3. Who watches the AI watchmen?
A Swiss art collective called !Mediengruppe Bitnik has long focused on “post-conceptual” pieces that use—or misuse—the web in the name of art. Their latest is something of a conundrum.
They created a web bot aptly titled Random Darknet Shopper, and it did exactly that: fed a weekly budget of $100 in Bitcoin, it would travel the darknet (the deep, shady part of the Internet that’s the web’s version of the Silk Road or the cantina from “Star Wars”), randomly purchase one item a week, and mail it to the collective, who then put it on display in Switzerland. Most of the haul was things like knock-off Nikes, but then the bot brought in a falsified Hungarian passport. Later, it mailed ten tabs of ecstasy from Germany. So who owns contraband ordered by an autonomous computer? And who is responsible for the law-breaking carried out by a randomly acting bot?
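The artists haven’t published the bot’s internals here, but the loop they describe (a fixed weekly budget, one random affordable purchase) is simple to sketch. The catalog below is an invented list of harmless stand-in listings, not anything the real bot scraped:

```python
import random

WEEKLY_BUDGET_USD = 100  # the collective's stated weekly allowance

def pick_random_purchase(listings, budget=WEEKLY_BUDGET_USD, rng=random):
    """Pick one random listing the bot can afford this week, or None.

    `listings` is a list of (name, price_usd) pairs, hypothetical stand-ins
    for whatever marketplace data the real bot worked from.
    """
    affordable = [item for item in listings if item[1] <= budget]
    return rng.choice(affordable) if affordable else None

# Hypothetical catalog; the bot never sees an item it can't afford this week.
catalog = [
    ("knock-off sneakers", 60),
    ("novelty sunglasses", 25),
    ("luxury watch (replica)", 150),
]
choice = pick_random_purchase(catalog)
```

The legally interesting part is precisely that no human picks `choice`: the selection is uniform over whatever the budget allows, which is how a passport or ecstasy can land in the mailbox without anyone having ordered it.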
No one’s quite clear on that yet. It will undoubtedly be one of the bigger questions going forward as folks start to experiment with AI and machine-learning gets more and more accessible to the masses. In case anyone’s wondering, though, law enforcement did eventually step in, according to the artists:
On the morning of January 12, the day after the three-month exhibition was closed, the public prosecutor’s office of St. Gallen seized and sealed our work. It seems, the purpose of the confiscation is to impede an endangerment of third parties through the drugs exhibited by destroying them. This is what we know at present. We believe that the confiscation is an unjustified intervention into freedom of art. We’d also like to thank Kunst Halle St. Gallen for their ongoing support and the wonderful collaboration. Furthermore, we are convinced, that it is an objective of art to shed light on the fringes of society and to pose fundamental contemporary questions.
All in all, it’s a lot more leeway than they would’ve likely gotten in the U.S.
It should be noted that though Stephen Hawking and Elon Musk have both said they believe that AI could spell the end of mankind, most estimates put truly ubiquitous AI (the kind science fiction has been warning us about) at least a few years away. But lawyers will need to be prepared for when that day inevitably gets here. Because once Google’s DeepMind wraps its mind around the battle strategy of StarCraft, we might not be able to hold off Skynet for long.