The Chronicle of Higher Education recently published an article discussing how artificial intelligence is changing teaching (sub. req’d). The discussion centers on many of the same themes we see when discussing artificial intelligence in law.

The CHE article asks the common questions: When you’ve got artificial intelligence handling work that is normally done by a human, how does that change the role of the professor? And what is the right balance of technology and teaching? Replace “professor” with “lawyer” and “teaching” with “lawyering,” and you get the idea.

Like the augmentation argument in law, the argument for teaching goes: AI tools automate some of teaching’s routine tasks so that professors can do what no machine can, which is to challenge and inspire students to gain a deeper understanding of what they’re learning.

And just as the argument in law raises privacy and ethical concerns about increasing reliance on AI, so goes the argument for teaching: skeptics worry that if education becomes increasingly reliant on artificial intelligence and automated responses, technology will end up in the driver’s seat and prompt formulaic approaches to learning. Some of the algorithms used in AI-driven tools are built on large data sets of student work, raising privacy and ethical questions. The CHE article also notes that the proprietary nature of the algorithms limits professors’ understanding of how the tools make decisions.

What we’re seeing here are the common issues that arise when artificially intelligent agents encroach on knowledge work.

On a practical level, the following tools are examples of AI being used in teaching:

  • Adaptive Courseware: With adaptive courseware, students first encounter material outside of class, often through short video lessons and readings. They take quizzes that assess their understanding of the material and, depending on the results, the courseware either advances them to the next lesson or provides supplemental instruction on concepts they don’t yet grasp. Advocates say this lets students study at their own pace and frees up the instructor’s time in class to shore up students’ knowledge or help them apply what they have learned. (A minimal sketch of this branch-or-advance logic appears after this list.)
  • Packback takes care of basic monitoring, like making sure that students stay on topic and ask open-ended questions that encourage discussion. It prompts students to back up their answers with sources and to write in more depth, and it uses an algorithm to give each post a ‘curiosity score’ based on those and other measures. Because everyone can see all the scores, some instructors say students often try harder on subsequent posts. (A hypothetical version of such a score appears after this list.)
  • Peerceptiv works by evaluating the reviewer, not the writing itself, says Chris Schunn, a professor of psychology, learning sciences and policy, and intelligent systems at the University of Pittsburgh and the principal investigator behind the program. It helps instructors by anonymizing and distributing student work, allowing each writing assignment to be reviewed by several classmates. It then monitors and graphs student feedback, including feedback on the reviewers. A student who hands out high ratings to every classmate while others write more nuanced evaluations will drop in rank as a reviewer; a student whose feedback others find helpful will see their score rise. (A sketch of this kind of reviewer-reputation scoring appears after this list.)
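
The branch-or-advance loop in the adaptive-courseware description is simple enough to sketch. The threshold, function names, and supplemental-material lookup below are illustrative assumptions, not details of any actual product:

```python
# Minimal sketch of one adaptive-courseware step: quiz, then branch.
# MASTERY_THRESHOLD and all names here are assumptions for illustration.

MASTERY_THRESHOLD = 0.8  # assumed quiz score needed to advance


def next_step(quiz_score, lesson, supplements):
    """Advance the student or route them to supplemental instruction."""
    if quiz_score >= MASTERY_THRESHOLD:
        return lesson + 1, "advance to the next lesson"
    # Below threshold: stay on this lesson and get targeted review material.
    return lesson, supplements.get(lesson, "review the core reading")


# Example: 65% on lesson 3 keeps the student on lesson 3 with extra material.
print(next_step(0.65, 3, {3: "short video re-explaining the concept"}))
```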
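
Packback’s actual algorithm is proprietary (which is part of the concern raised above), but a curiosity score built from the measures the article mentions might look something like this. Every signal, weight, and cutoff here is an assumption:

```python
# Hypothetical "curiosity score": a weighted blend of on-topic,
# open-endedness, sourcing, and depth signals. All weights are assumptions.

def curiosity_score(on_topic, open_ended, num_sources, word_count):
    """Return a 0-100 score from four post-quality signals."""
    sourcing = min(num_sources / 2, 1.0)  # assume two cited sources = full credit
    depth = min(word_count / 300, 1.0)    # assume ~300 words = full depth credit
    raw = 0.3 * on_topic + 0.3 * open_ended + 0.2 * sourcing + 0.2 * depth
    return round(100 * raw)


# Example: an on-topic, open-ended post with one source and 150 words.
print(curiosity_score(on_topic=1.0, open_ended=1.0, num_sources=1, word_count=150))  # -> 80
```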
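
Finally, the reviewer-reputation idea behind Peerceptiv, as described, can be sketched as a mix of two signals: agreement with other reviewers and perceived helpfulness. The formula below is an illustrative assumption, not Peerceptiv’s actual method:

```python
# Sketch of reviewer-reputation scoring: uniform ratings that diverge from
# peers lower the rank; feedback marked helpful raises it. The weights and
# the 1-5 rating scale are assumptions.

from statistics import mean


def reviewer_rank(my_ratings, peer_mean_ratings, helpful_votes):
    """Combine peer agreement and helpfulness into a 0-1 reviewer rank."""
    # Agreement: penalize average deviation from other reviewers' ratings
    # (on an assumed 1-5 scale, deviation maxes out at 4).
    deviation = mean(abs(mine - peers)
                     for mine, peers in zip(my_ratings, peer_mean_ratings))
    agreement = max(0.0, 1.0 - deviation / 4)
    # Helpfulness: share of classmates who marked the feedback as helpful.
    helpfulness = mean(helpful_votes) if helpful_votes else 0.5
    return 0.6 * agreement + 0.4 * helpfulness


# Example: a reviewer who rates everything 5 while peers average around 3.
print(reviewer_rank([5, 5, 5], [3.2, 2.8, 3.5], [0, 1, 0]))
```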

Law professors can and should put these technologies to work in support of effective learning strategies. When they use AI in teaching, it would be wonderful if they also pointed out the parallels to using AI in law. Doing so would give students concrete examples of the pitfalls of relying on proprietary algorithms to outsource low-level lawyer functions, and valuable insight as they continue to rely heavily on AI in all aspects of life.

Ultimately, we should all start preparing for this inevitable future, with the same caveat that applies in law: the instructor must remain in charge of the classroom.