Artificial intelligence doesn’t equal artificial perfection. I have argued for a while now, both on this blog and in a forthcoming law review article, that lawyers (and the investigators who work for them) have little to fear and much to gain as artificial intelligence gets smarter.

Computers may be able to do a lot more than they used to, but there is so much more information for them to sort through that humans will long be needed to pick through the results, just as they are now. Right now we have no quick way to word-search the billions of hours of YouTube videos and podcasts, but that time is coming soon.

The key point is that some AI programs will work better than others, but even the best ones will make mistakes or will only get us so far.

So argues British mathematician Hannah Fry in a new book previewed in her recent essay in The Wall Street Journal. Fry argues that instead of placing blind faith in algorithms and artificial intelligence, we should favor the applications we admit work reasonably well but imperfectly, and that require collaboration with human beings.

That’s collaboration, not simply implementation. Who has not been infuriated at the hands of some company, only to complain and be told, “that’s what the computer’s telling me”?

The fault may be less with the computer program than with the dumb company that doesn’t empower its people to work with and override computers that make mistakes at the expense of their customers.

Fry writes that some algorithms do great things – diagnose cancer, catch serial killers and avoid plane crashes. But beware the modern snake-oil salesman:

Despite a lack of scientific evidence to support such claims, companies are selling algorithms to police forces and governments that can supposedly ‘predict’ whether someone is a terrorist, or a pedophile based on his or her facial characteristics alone. Others insist their algorithms can suggest a change to a single line of a screenplay that will make the movie more profitable at the box office. Matchmaking services insist their algorithm will locate your one true love.

Just as important for lawyers worried about losing their jobs, consider the successful AI applications above. Are we worried that oncologists, homicide detectives and air traffic controllers are endangered occupations? Until there is a cure for cancer, we are not.

We just think these people will be able to do their jobs better with the help of AI.


Charles Griffin is headed by Philip Segal, a New York attorney with extensive experience in corporate investigations in the U.S. for AmLaw 100 law firms and Fortune 100 companies. Segal worked previously as a case manager for the James Mintz Group in New York and as North American Partner and General Counsel for GPW, a British business intelligence firm. Prior to becoming an attorney, Segal was the Finance Editor of the Asian Wall Street Journal, and worked as a journalist in five countries over 19 years with a specialization in finance. In 2012, he was named by Lawline as one of the top 40 lawyers furthering legal education.  Segal has also been a guest speaker at Columbia University on investigating complex international financing structures, and taught a seminar on Asian economics as a Freeman Scholar at the University of Indiana.  He is the author of the book, The Art of Fact Investigation: Creative Thinking in the Age of Information Overload (Ignaz Press, 2016). He lectures widely on fact investigation and ethics to bar associations across the United States.