The money part is simple. But how do firms win on culture?
The above 2×2 matrix provides a window on the war for associate-level law firm talent in the London-UK market. The firms in the top-right quadrant are winning. This is true for two reasons. First, they’re paying salaries at or near the top of the market. Second, they’re earning high marks for their culture–what I called a culture of “high performance.”
As a result, these firms–all well-known global brands–are well-positioned to attract and retain A-level talent. Over the long term, A-level talent is a crucial input for premium pricing and higher profits. See Pisano, “The Hard Truth About Innovative Cultures,” Harv. Bus. Rev. (Jan/Feb 2019).
The purpose of this post is to provide readers with an example of a valid firm culture measure that can support an effective law firm talent strategy. I’m a statistician, not a lawyer. But given my experience working with law firms, I’m certain the first question lawyers will ask is, “How did you measure firm culture?”
The data in Figure 1 are derived from Legal Cheek, a London-based publication that focuses on the London-UK legal market, which includes many US-based global law firms. See Post 082 (discussing Legal Cheek data and using it to analyze the relationship between associate hours and pay). Because its readers are trainees and junior and mid-level associates at law firms, Legal Cheek conducts an annual survey that evaluates London-UK firms on ten factors.
Legal Cheek’s Insider Scorecard assigns each of more than 80 leading law firms a letter grade (A+, A, B, C, or D) on each of the ten factors above. The current Insider Scorecard is based on survey responses from more than 2,000 trainees and junior associates.
In the “Not all statistical methods are equal” section below, I explain how these data can be used to construct a valid and useful measure of law firm culture. But first let’s understand the business problem we are trying to solve.
Office Managing Partner
Imagine you’re the London office managing partner of a global law firm and Legal Cheek just published its 2019 report cards. You’re happy that associates and trainees gave your firm an “A+” for Office Perks and Quality of Work. But the firm also got a “C” on Work/Life Balance and “Bs” on everything else. Several direct competitors did better on Training, Technology, Social Life, and Partner Approachability. One got an A on Work/Life Balance. Should you be worried?
To compound matters, various key stakeholders at your firm have strong opinions. How do you respond when partners and the Director of Recruiting send you emails focused on the one or two factors they care about? Assuming you build a strategy to improve the firm’s scores, how can you be confident the benefits are not outweighed by the costs of time, money, and overall distraction? What kind of accounting can you give leaders in the home office?
I have empathy for lawyer managers who face these questions. They are responsible for the performance of their office yet generally lack the time, tools, and training to devise an evidence-based strategy that is likely to work.
To boil it down, the office managing partner needs to know which cultural factors are most influential with associates and trainees, so that if the firm scores well on those factors, she can be confident the overall marketplace buzz on the firm’s culture is positive relative to the competition. Fortunately, statistics can identify such factors with considerable precision.
Not all statistical methods are equal
Let’s start with what is unlikely to be effective. It would be a bad idea to set a London-UK talent strategy based on comparing firms on the average of the ten Legal Cheek factors. Why? Some factors are bound to be more influential than others. For example, when it comes to attracting and retaining lawyers, how important are Office Perks versus Training versus Work/Life Balance? Cf. Porter, “What is Strategy?,” Harv. Bus. Rev. (Nov/Dec 1996) at 70 (observing that “the essence of strategy is choosing what not to do”).
Another bad idea is to listen to the lawyer who spins the most compelling narrative about why a particular factor matters much more than the others. We can do better than speculation and “anecdata.” We live in a world with actual data.
The high performance culture measure in Figure 1 was created using a statistical method called Item Response Theory (IRT). When we apply IRT to our managing partner’s problem, we assume that firm culture manifests itself in a number of observable and concrete patterns, or “culture indicators.” Thinking like a social scientist, we can say firm culture is a “latent variable”–an unobserved concept whose real-world manifestations provide a way to measure it anyway.
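For readers who want to see the machinery, here is a minimal sketch of the latent-variable idea in Python. It uses the two-parameter logistic (2PL) model, the simplest IRT setup. The actual analysis of ordered letter grades would use a graded-response extension of this model, and all numbers below are illustrative, not estimates from the Legal Cheek data.

```python
import math

def p_favorable(theta, a, b):
    """2PL IRT: probability that a firm with latent culture level `theta`
    earns a favorable grade on an indicator with discrimination `a`
    and difficulty `b`. (Binary simplification of letter grades.)"""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A strong-culture firm (theta = 1.5) vs. a weak one (theta = -1.5)
# on a highly discriminating indicator (a = 2.0, b = 0.0):
print(p_favorable(1.5, 2.0, 0.0))   # high probability of a good grade
print(p_favorable(-1.5, 2.0, 0.0))  # low probability of a good grade
```

The key point is that the grades we observe are probabilistic functions of an unobserved quantity, which is exactly what lets us estimate that quantity from the grades.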
Many readers will prefer concreteness to abstraction. But building an information-rich culture metric requires using the indicators simultaneously. To borrow terms from my graduate school advisor and mentor, Jim Stimson, we engage in “lumping” rather than “splitting.” The hope is that, by the article’s conclusion, readers will agree that being a “lumper” is worth it.
Firm culture components
As lumpers we can generate a large volume of clean and simple numerical insights. We’ll focus on two in particular: (1) which indicators are most important to firm culture, and (2) what a quantified culture score can tell lawyer-managers about their current prospects in the war for talent.
Consider the nine culture indicators (Canteen is excluded due to missing data): how well does each isolate high-performing firm cultures? In the educational testing field in which IRT was invented, the answer to this question centers on an indicator’s “discrimination” value. A culture indicator with a high discrimination value is, in effect, highly revealing data.
For example, consider the need to measure overall legal ability in law school. A course that has a high discrimination value might be Federal Jurisdiction–earning an “A” clearly signals strong legal ability. Conversely, a course that has a low discrimination value might be Law and Cinema. So in measuring overall legal ability, the grade in Federal Jurisdiction will matter more than the grade in Law and Cinema.
On high-discrimination culture factors, firms receiving a favorable grade (A+, A) are very likely to have high-performing cultures, and vice versa. Figure 2 below reports discrimination values for the nine factors. The higher the number, the more important the factor is to firm culture. By this measure, Training and Social Life are very important to associates. Peer Support and Quality of Work also rank high. At the other end, the score for Work/Life Balance is exceptionally low: the data tell us it is not much of a contributing factor. The Office and Office Perks are also largely inconsequential.
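To see why a high-discrimination factor is so revealing, compare how strongly a good grade shifts the evidence toward “high culture” versus “low culture.” The sketch below uses hypothetical discrimination values (2.0 and 0.3), not the estimates reported in Figure 2.

```python
import math

def p_good_grade(theta, a, b=0.0):
    # 2PL item response curve (letter grades simplified to good/bad)
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def likelihood_ratio(a):
    """How much more likely a good grade is for a high-culture firm
    (theta = +1) than a low-culture firm (theta = -1), given
    discrimination a. Bigger ratio = more revealing indicator."""
    return p_good_grade(1.0, a) / p_good_grade(-1.0, a)

print(likelihood_ratio(2.0))   # hypothetical high-discrimination factor
print(likelihood_ratio(0.3))   # hypothetical low-discrimination factor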
Just like statistical methods, not all culture factors are equal. According to associates, Training and Social Life play the biggest defining roles. So as the London managing partner, you now have powerful information to guide an effective talent strategy. (Contrast this with an approach that simply averages the nine indicators, which by design treats firm culture as nine equally important factors.)
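A rough way to convey the intuition, though not the actual IRT scoring procedure (which estimates each firm’s latent score directly from the response model), is a discrimination-weighted average. All grades and weights below are hypothetical:

```python
# Hypothetical factor grades mapped to numbers (A+ = 5 ... D = 1) and
# hypothetical discrimination weights -- not actual Legal Cheek values.
grades = {"Training": 5, "Social Life": 4, "Peer Support": 4,
          "Quality of Work": 5, "Partner Approachability": 3,
          "Technology": 3, "The Office": 3, "Office Perks": 5,
          "Work/Life Balance": 2}
weights = {"Training": 2.0, "Social Life": 1.8, "Peer Support": 1.5,
           "Quality of Work": 1.5, "Partner Approachability": 1.0,
           "Technology": 0.8, "The Office": 0.4, "Office Perks": 0.4,
           "Work/Life Balance": 0.2}

plain_avg = sum(grades.values()) / len(grades)
weighted_avg = (sum(weights[f] * grades[f] for f in grades)
                / sum(weights.values()))

# The weighted score discounts the A+ on Office Perks and the weak
# Work/Life Balance grade, and rewards the factors that matter most.
print(round(plain_avg, 2), round(weighted_avg, 2))
```

The same set of grades looks quite different once the factors stop counting equally, which is the whole point of letting the model assign the weights.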
Evaluating market position
Figure 1 at the top of the page plots each firm’s culture score (horizontal axis) against its starting salary for a newly qualified UK attorney (vertical axis). Using this grid, it is straightforward to evaluate firms’ market positions. The orange lines denote the average culture score and average salary. Each dot reflects a specific firm’s position in the 2×2 grid.
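Once the scores exist, placing firms in the four quadrants is mechanical. A minimal sketch, with entirely made-up culture scores and salaries:

```python
# Hypothetical (culture_score, salary) pairs for four firms.
firms = {"Firm W": (0.9, 147_000), "Firm X": (-0.4, 150_000),
         "Firm Y": (0.6, 100_000), "Firm Z": (-0.8, 95_000)}

# The orange lines in a Figure 1-style chart: market averages.
avg_culture = sum(c for c, _ in firms.values()) / len(firms)
avg_salary = sum(s for _, s in firms.values()) / len(firms)

def quadrant(culture, salary):
    """Place a firm in the 2x2 grid relative to the market averages."""
    row = "high pay" if salary >= avg_salary else "low pay"
    col = "strong culture" if culture >= avg_culture else "weak culture"
    return f"{row} / {col}"

for name, (c, s) in firms.items():
    print(name, "->", quadrant(c, s))
```

Firms landing in the “high pay / strong culture” cell are the top-right quadrant winners described above.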
Named firms in the figure offer associates the total package (money and culture) in the London-UK market. For example, Kirkland’s culture score puts it with the other culture high-performers. Combined with a willingness to pay top salary, we conclude that–relative to other London-UK market firms–Kirkland is well-positioned to win the talent war. Ropes & Gray also delivers a total package, albeit with a starting salary somewhat lower than Kirkland’s.
A visual like Figure 1 is the statistician’s stock-in-trade. By working with firms, however, I’ve learned that lawyers often want a succinct and more literal presentation. Cf. Post 008 (discussing the importance of cultural compatibility in adoption of innovations). Let’s translate the culture results back into the familiar scorecard. Figure 3 below integrates the culture scores (listed in percentile terms) with the Legal Cheek factor grades for Kirkland and Ropes & Gray. It also reports scores and grades for two firms that are in the competitive set but have lower-performing cultures.
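Translating a model-based culture score into the percentile terms used in Figure 3 is a short computation. A sketch with hypothetical scores:

```python
# Model-based culture scores on an arbitrary latent scale (hypothetical).
scores = [-1.2, -0.7, -0.3, 0.0, 0.4, 0.8, 1.5]

def percentile_rank(x, population):
    """Share of firms scoring at or below x, as a percentage."""
    return 100.0 * sum(s <= x for s in population) / len(population)

# A score of 0.8 on the latent scale becomes a percentile lawyers can read.
print(percentile_rank(0.8, scores))
```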
With Figure 3 we can unpack why Ropes & Gray and Kirkland have superior cultures. First, they do very well on the most important factors: Training, Social Life, and Peer Support. On these items, Competitor Firms A and B (both elite US-based law firms that compete in the London market) manage only a single “A” between them. These strong grades are enough to distinguish Ropes & Gray and Kirkland even though, at all four firms, the associates and trainees say they have A-level or better Quality of Work.
Also note the grade similarity on Work/Life Balance across the firms. With the exception of Competitor Firm A, Work/Life Balance is a C-level offering or worse. Yet, recall that Work/Life Balance has low discrimination as an indicator of culture. As a result, it’s possible that uniformly bad Work/Life Balance grades can coexist with these firms’ divergent professional cultures.
The scorecard that includes the culture score is powerful as an instrument of persuasion. It clarifies the value of assigning model-derived weights to the nine factors. In general, it shows how rigorous analytics can be made simple, thus helping lawyer-managers develop an effective strategy and position their firm to win the talent war.
The idea to write this article crystallized about a month ago while I was attending the inspire.legal “unconference” organized by Christian Lang. See Post 083 and Post 084 (describing what made the inspire.legal event a unique professional gathering). I concluded then that an under-appreciated need in driving innovation is having experienced analysts who can educate others about the creative possibilities that exist using data science. Accordingly, the above reflects an initial effort to share some “art of the possible” in data-driven firm management.
Here were some of my key takeaways from the inspire.legal event:
- Demand for data analytics in law is palpable.
- When lawyers and legal professionals draw on their experiences, they can articulate compelling accounts about their needs and problems.
- Lawyers and legal professionals are struggling to convert their experiential knowledge into analytics-based solutions.
- Among a self-selected group of innovators and early adopters, there is lingering skepticism about the ability of data analytics to solve firms’ challenges.
I owe a debt to Christian and the inspire.legal attendees because these takeaways do inspire me. They shine light on the need for analysts to serve as educators–broadening awareness about data science techniques that enable people like our London managing partner to quantify and act on experiential knowledge effectively.
This look at some of the “art of the possible” lays the groundwork for establishing key principles for using analytics. Among the most important are that (1) often, you have to assume something to learn something, and (2) there’s value in “lumping” data and learning from model-based abstractions (provided that these quantitative abstractions have rigorous foundations).
A third thing to remember is that, just because you’re using data, it doesn’t mean you’ll make different decisions. Based on the results above, the London-UK firms that fall in the winner’s quadrant just got confirmation that their management strategy is working. Bolstered confidence to stay the course is as valuable as any data that recommends a new direction.
Finally, a reminder, now widely understood, that our judgments and decisions are strongly shaped by prior beliefs. Using your priors (“your gut”) can be powerful, but cognitive biases often lead us astray, especially if we act on prior beliefs unquestioningly. An analysis like the one above provides a framework for integrating prior beliefs and data. This way of thinking can mitigate bias and improve decision making. It’s called “Bayesian updating,” and it’s a good mental model to keep in mind as we increasingly use data to inform decisions.
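For the curious, Bayesian updating can be shown in a few lines. The beta-binomial sketch below, with entirely hypothetical numbers, shows how a managing partner’s prior belief about culture sentiment shifts after new survey evidence:

```python
# Prior belief: roughly 60% of associates view the firm's culture
# favorably, held with modest confidence -- encoded as Beta(6, 4).
prior_a, prior_b = 6, 4            # prior "favorable" / "unfavorable" counts

# New evidence: a survey finds 140 of 200 respondents favorable.
favorable, unfavorable = 140, 60

# Conjugate update: just add the observed counts to the prior counts.
post_a = prior_a + favorable
post_b = prior_b + unfavorable
posterior_mean = post_a / (post_a + post_b)

print(round(posterior_mean, 3))  # prior mean 0.60, pulled toward the data
```

The posterior blends gut and evidence in proportion to how much data arrives, which is the disciplined version of “updating your priors.”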