Let’s start with why.

Paladin’s mission is more & better pro bono. The part of that mission we’ve been focused on for the past year is building the network, and the infrastructure to support that network, to move pro bono opportunities efficiently between legal service providers and law firms with lawyers who have the capacity and passion to take them on.

That’s a lot of work, and we’re far from done. But it is working. Paladin has already placed thousands of hours of pro bono work with both law firms and large corporations, and we’re just getting started. But that infrastructure isn’t what this project was about. This project asks: once you have the infrastructure in place, how do you measure it?

As a designer, I take the stance of “I know nothing”. That’s not true, obviously, but when it comes to how to measure the efficacy or impact of a pro bono program, there are many more knowledgeable people than myself, people who’ve been doing this for decades. At Paladin, we’re extremely fortunate to know some of the best of them. So despite my knowing nothing, as a designer, I do know how to glean insight out of the brains of extremely smart and busy people in a way that’s not just useful, but unexpected, and dare I say, sometimes fun. And if you didn’t see it coming — which would be weird, since it’s in the title — the way we did that was by running a design sprint.

Quick “what is a design sprint” aside

So you’ve never read a Medium post about running a design sprint? Cool, this’ll only take a moment.

What’s a design sprint?

A design sprint helps define a problem, come up with lots of ideas to solve that problem, then test which of those might work. Quickly.

A design sprint, popularized, codified, and book-written-by’d by Google Ventures and Jake Knapp, typically takes place over 5 very busy days with a core group of builders, technologists, and domain experts. Over those ~five days your team:

  1. Maps out the problem space and agrees on a goal.
  2. Sketches and storyboards ideas.
  3. Builds a prototype designed to answer specific validation questions.
  4. And finally, validates whether your proposed solutions seem to be on the right path.

We didn’t use all those interview sessions… but we used most of those interview sessions. 🤯

(Here’s a video from the team at AJ&Smart on their updated design sprint approach, which we took a lot of inspiration from — thanks AJ&Smart!)

The group performing a design sprint is often a client team and a design or dev shop, sometimes it’s a purely internal team talking to customers; for this sprint, we took something of a hybrid approach. We’re building for current and potential customers, who happen to be the domain experts. So we engaged them at the very beginning of the process (the phase commonly known as “Ask the Experts”) but then used that same group at the end of the process to validate the prototype we created.

90% of a design sprint is just staring at stickies on walls. True story.

Another place we changed the process was having our panel of participants — pro bono partners and counsel from leading law firms (we’ll call them pro bono experts, or PBEs, from here on out) — help us define our sprint goal. This was important to us because, as current and prospective clients, we want not just the knowledge in their brains, but the buy-in… in their hearts 💖. Paladin is a product the pro bono community is creating with us, so it’s important they know they have influence over the direction of the sprint and the eventual product. After all, what good is a product if it doesn’t solve the problems we’re told are most relevant? After their feedback, our sprint goal became:

Firm Pro Bono Counsel and Legal Service Organizations/Clearinghouses will have the data and stories to:

  1. Strengthen their pro bono programs,
  2. Develop and empower their lawyers to do more and better pro bono work, and
  3. Achieve better client outcomes.

On to the juicy bits.

Juicy bits, part 1: The input

Mapping the problem space

Overall, we had a dozen PBEs, broken out into 4 initial groups, over the course of day one. Each of them also participated at the end of the sprint week to validate the prototype, and we ran a supplementary survey at this year’s ABA Equal Justice Conference (EJC).

We began by asking each group of participants questions to hear their insights into their programs at both a high (strategic) and a low (tactical) level.

  • What is the most important thing to measure and why?
  • How do you measure the impact your program is having?
  • What can’t you currently measure — what’s the holy grail?
  • What do you report on that you wish you didn’t have to? What report is just a proxy for something else?
  • What is not important to capture, what information do you have that you don’t use?
  • What qualitative information is most important to capture?
  • How might you report without tracking hours?
  • How do you digest this info? Are you presenting it elsewhere? To whom and in what format? What do they want to know?

I’m sure you’ll notice a lot of these questions are very similar, and that’s on purpose. The goal is to find ways to get answers that are deeper than just what’s top of mind or reflexive.

Based on the answers of the groups, we created a grid of metrics they described as important to their programs. We then gave each participant 5 votes — everyone wanted more, of course — with the intent to:

  • Prioritize for themselves: Our primary goal was to find out what metrics each participant felt were most useful to them and their program.
  • Be transparent about tradeoffs: We also wanted to manage expectations and give participants ownership over our prioritization process.
  • Show convergent desires in real time: Lastly, most of our participants don’t normally go through this type of process — especially not together — and by seeing where their needs converge or diverge with one another, they can better understand what tools are most missing across the industry, and how they’re working differently.

We used Miro (formerly RealtimeBoard) to run these sessions.
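If you’re curious how dot-vote results like these roll up, here’s a minimal sketch. The metric names and ballots below are invented for illustration — they are not our participants’ actual votes — but the tallying logic is the same: each participant spends 5 votes, and counting across everyone surfaces where needs converge.

```python
from collections import Counter

# Each participant gets 5 votes to spend across the metric grid.
# These ballots are hypothetical, for illustration only.
ballots = [
    ["hours", "hours", "client outcomes", "attorney satisfaction", "hours by office"],
    ["hours", "client outcomes", "client outcomes", "skill development", "hours"],
    ["hours by office", "hours", "attorney satisfaction", "client outcomes", "hours"],
]

# Tally every vote across all participants to see where needs converge.
tally = Counter(vote for ballot in ballots for vote in ballot)

# Most-voted metrics first — the real-time "convergence" view.
for metric, votes in tally.most_common():
    print(f"{metric}: {votes}")
```

In practice we did this visually on the Miro board, but the principle is identical: the ranked tally is what made convergent (and divergent) priorities obvious to the whole group at once.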

Below is how the results quantify across both our PBE panel and the audience at EJC.

So, what does all that mean? We found four key themes.

  1. Hours matter: “I cannot stress enough the importance of hours.” Unsurprisingly, all participants felt the essential way to measure their program was to measure everything in hours. Even when pushed or presented with alternatives, this is de facto how work is measured.
  2. Questions are unpredictable: There are some metrics that our participants go to often (engagement by office, hours by practice area/community served), but generally, the questions they need to answer for their superiors, for partner legal service organizations, or for their own knowledge are always changing.
  3. Attorney satisfaction & development: A major point of interest was the ability to quantify and be alerted to how attorneys are doing. This was both regarding satisfaction (with the pro bono opportunity, with their job overall), and skill development.
  4. Client outcomes: The “holy grail” as described by many participants is having ~real time, quantifiable insight into the actual outcomes for the pro bono client. This was generally perceived as impossible to get, but would be “game-changing”.

Juicy bits 2, electric boogaloo: The prototype

Ideating potential solutions

Based on those key themes and the focus areas identified by our participants, we got to work sketching and storyboarding solutions.

Obligatory process sketches! What could they become?! Indistinguishable from any other design post!

Quick design curmudgeon aside: Sketches are great and an important part of the process, but they don’t really communicate much in a post or portfolio.

The main takeaway from this part of the process that I share with all my teams (and have stolen from working with Redesign.Health’s Adam Brodowski from his MadebyMany days — thanks Adam!):

The point of a validation prototype is not to try to come up with the most correct solution; it’s to push the participant to have strong reactions to what’s being presented, so they can share why it or any other possible solution would best solve their needs.

Another way I’ve heard this put: If you want a client to draw a square, but they’re reluctant to do so themselves, start drawing a circle for them and have them correct you about how you’re wrong.

Another another way:

It’s all about eliciting the most honest insight.

End of curmudgeonly aside.

A sample screen from the Paladin reporting sprint prototype

The moment you’ve all been waiting for: this is the prototype we built and shared with our participants to gauge how best to measure the impact of their pro bono programs. Take a moment, click around, we’ll be here when you get back.

Hi. Great, right? I know. Again, the goal of a prototype is not to create the absolute most correct solution — we don’t know that yet! — but to elicit strong reactions. For example: the second screen, “Reporting Dashboard,” is never something we’d actually build. But the two paths it creates helped us gauge the usefulness of two very different types of information.

We found those two paths became the big prioritization question: “What is more important: hourly participation data, or attorney development & outcome data?”

The reason for this prioritization question is twofold:

  1. Hourly participation data requires connecting with firm partners’ existing systems. Attorney & outcome data does not, making it far quicker to build.
  2. Hourly participation data is the current paradigm for measurement, as it’s the data Pro Bono Counsel currently have and the data by which firms are judged externally (e.g. AMLAW).

Juicy bits 3, juicy with a vengeance: The results

Validating which ideas have merit

Results were split, but conclusive.

When we asked participants to prioritize between these groups, it was an even split, best captured by one participant:

“The metrics that best help to strengthen my pro bono program: outcome and satisfaction data. When push comes to shove, the thing I need first is hourly participation data.”

Half of our participants prioritized hourly data, the other half prioritized attorney & outcome data.

Participants who already have direct access to hourly data (meaning they don’t have to ask someone else to pull a report for them) prioritized the data they didn’t have (attorney & outcome data) — but they would not be willing to give up the hourly data in order to get the attorney & outcome data.

We describe this split as “excitement” vs. “critical need”.

Excitement

In the end, we proved that quantifying the data our participants didn’t have was extremely exciting. Far and away the most well received aspect of the prototype was the attorney development & outcome data exit survey.

All participants immediately had ideas on how they’d use this data to strengthen their programs.

Critical Need

But the information each participant needs day-to-day to do their job well comes from “slicing & dicing” hourly participation data.

Providing an interface for looking deeply into the data was well received, but it did not appear to solve problems drastically more effectively than simply giving access to the data, with simple, intuitive ways to create reports (offices, practice groups, and community served seemed to be the most commonly requested categories).

Most participants were comfortable slicing & dicing the data themselves in Excel or having their staff do it. The true pain point was getting reports from elsewhere (e.g. their finance dept. or legal service partners) and having to manually combine and parse. Giving direct access begins to solve this problem.

A True Opportunity

Though it’s clear reporting functionality that integrates with existing firm software is the priority, we found there’s a true opportunity to help Pro Bono Counsel strengthen their programs in new ways by providing the framework and interface for capturing Attorney Satisfaction & Outcome data.

Though this shouldn’t come first, it’d be a missed opportunity to put it off for long.

Post Sprint: Now What?

Now comes the fun part. A design sprint is a great way to get signal that you’re on the wrong or right track. But the actual iterative design and development work needs to follow it up. As we continue to build out the network for pro bono, we’ll begin building out ways to report on that network so Pro Bono Counsel and Legal Service Organizations can strengthen their programs, develop their lawyers, and achieve better outcomes for their clients.

If you’d like to get involved — as a client, a pro bono champion, or maybe even to join our team, give us a shout.

Thanks for reading.


Running a Legaltech Design Sprint with a Dozen Pro Bono Counsel Across Two Continents was originally published in Paladin on Medium, where people are continuing the conversation by highlighting and responding to this story.