LEADERS EXPLORING GENERATIVE AI IN LAW
Client Survey Annotations
Moving the Needle with GenAI
This survey explores how GenAI is reshaping the economics of legal service delivery: how work is sourced, structured, or priced.
We intentionally exclude experimental activity to focus on initiatives with visible commercial impact—budget decisions, shifting work between providers, or changing the way fees are set and justified. Our objective is to surface where GenAI is moving from theoretical promise to commercial reality. In short, we’re looking for instances where GenAI has become a factor in your sourcing decisions or economic outcomes, such as:
- Reducing external spend through automation or insourcing;
- Capturing more value—a higher yield for your money—from existing providers;
- New pricing approaches tied to GenAI-enabled efficiency; or
- Adjusting your sourcing mix based on a provider’s GenAI capabilities.
We recognize that commercial decisions are complex. For our purposes, GenAI need not be the sole driver of change; it is enough that you regard it as a meaningful factor in your commercial decisions or a clear contributor to economic outcomes.
L.E.G.A.L. Client Survey responses are used only in de-identified and/or aggregated form and are not shared in a client-attributable way.
ANNOTATION: This survey is designed to identify where GenAI is beginning to change the economics of legal service delivery from the client side—how it influences budget, sourcing decisions, and the overall balance between internal and external legal resources.
We know most law departments are still early in this journey. Many GenAI efforts remain exploratory or internally focused. That’s expected. Because it is still early days, it is exceptionally valuable to the collective conversation to surface where GenAI has started to make a visible difference in how work is distributed, priced, or valued—even if modest or emerging.
We’re not seeking to capture every experiment or workflow adjustment. Our focus is where GenAI has become a factor in economic outcomes—for example, where it has:
- Reduced external spend by enabling automation or insourcing;
- Helped you achieve “more for the same” through improved cycle times or quality; or
- Supported changes in how providers are selected, priced, or managed.
It’s understood that these shifts are gradual and often driven by multiple forces—budget pressure, regulatory complexity, leadership priorities, technology maturity. GenAI doesn’t need to be the only cause, only a meaningful contributor. If GenAI is even part of the story, that’s valuable insight.
Economic outcomes are a lagging indicator. We’re therefore also interested in where GenAI hasn’t yet moved the needle. In some organizations, GenAI is still viewed as an efficiency enabler without a direct line to financial outcomes. Capturing that reality helps benchmark the industry’s true point of evolution—where potential still exceeds commercial translation.
In short, we’re seeking your perspective on how, if at all, GenAI is starting to register as an economic signal—in how you budget, source, and measure value across legal service portfolios.
1. Postures. Which of the following best characterizes your broader organization’s GenAI adoption pattern? Your law department’s? And your preference for your external providers?
Our organization’s posture toward GenAI is [ Dropdown ]
Our law department’s posture toward GenAI is [ Dropdown ]
We prefer our external providers’ posture toward GenAI be [ Dropdown ]
Dropdown Options
Wait & See (intentionally avoiding deep engagement until value and safety are clearly proven)
Preliminary Engagement (limited, informal exploration or discussion underway)
Initial Integration (incorporating GenAI into select workflows)
All Deliberate Speed (pursuing scaled deployment as quickly as prudence permits)
All In (moving aggressively despite heightened risk of rework or disruption)
[optional] Commentary. You are welcome to add color, context, clarifications, or caveats to the above:
Broader organization’s posture: Reflects the overarching risk tolerance and digital appetite within your company—the tone from the top. This sets the operating environment for all internal functions, including the law department. For example, a company that is “All In” on AI experimentation will implicitly expect the law department to facilitate, not frustrate, that ambition. Conversely, a company in “Wait & See” mode may expect the law department to lead with caution and control.
Law department’s posture: Shows how your department interprets and operationalizes that corporate context. Some departments mirror the enterprise stance; others act as counterweights, slowing things down or carving out exceptions. Your response indicates whether you’re following, moderating, or leading within your organization. “Initial Integration” and “All Deliberate Speed” often signal departments that are moving from internal proofs of concept toward real, embedded workflows.
Preferred provider posture: Reveals what you want from the ecosystem around you. Provider readiness often determines the pace at which clients can translate GenAI into measurable value. If your department is more advanced than your firm generally, you may prefer that the firm accelerates. If your own footing is tentative, you may prefer that the firm proceeds cautiously to reduce risk. Capturing your preference helps illuminate alignment—or misalignment—between client demand and provider supply.
Posture is not maturity: Maturity involves institutionalization—policies, governance, performance measures. Posture captures intent and trajectory: how aggressively or conservatively an organization is leaning into GenAI. It is a leading indicator of readiness to adapt, resource, and scale. By triangulating these three perspectives (corporate, departmental, provider), we can better understand where enthusiasm, caution, and capability intersect—and where they don’t.
The five-tier scale was intentionally crafted to be intuitive and nonjudgmental. Each label conveys pace and posture without implying virtue or deficiency. “All Deliberate Speed,” for instance, recognizes that moving fast and being prudent are not opposites. “Preliminary Engagement” distinguishes between those still exploring and those already integrating. “All In” acknowledges a small but important cohort taking calculated risks to gain first-mover advantage.
Our intent is to establish a foundation for comparative insight. When analyzed across both clients and providers, posture data shows whether commercial partners are evolving in sync—or whether gaps in tempo and trust are forming. Those gaps often explain downstream friction in pricing, procurement, and engagement.
Put simply, this question tells us where everyone is on the map—and how closely they’re walking together.
While the dropdown options are imperative for comparative benchmarking, you are welcome to elaborate in the completely optional commentary box should you feel overly constrained by the structured responses.
2. Commercial Direction. Which one of the following scenarios do you expect to be closest to the commercial direction of the legal market over the next 36 months?
| | Demand Down | Demand Flat | Demand Up |
|---|---|---|---|
| Economics Unchanged | Demand Down - Economics Unchanged | Demand Flat - Economics Unchanged | Demand Up - Economics Unchanged |
| Economics Transformed | Demand Down - Economics Transformed | Demand Flat - Economics Transformed | Demand Up - Economics Transformed |
Dropdown Options
Demand Down / Economics Unchanged (Demand for legal services contracts, but the fundamental business model remains intact)
Demand Down / Economics Transformed (GenAI reduces demand for legal services while also forcing fundamental change in market economics)
Demand Flat / Economics Unchanged (On net, market demand neither grows nor contracts; GenAI absorbed within existing economic structures)
Demand Flat / Economics Transformed (Demand stays roughly level, but how market functions changes significantly)
Demand Up / Economics Unchanged (Demand for legal services grows, but traditional delivery models persist)
Demand Up / Economics Transformed (GenAI both enlarges the market and reshapes its economics)
ANNOTATION: This question is designed to situate GenAI within the bigger picture of market economics. We are asking you to forecast not just your own business but the overall commercial direction of the legal market over the next three years.
The structure forces a choice across two axes:
- Demand for legal services → rising, falling, or flat.
- Economics of delivery → unchanged vs. transformed.
That yields six distinct combinations, each carrying a different implication:
Demand Down / Economics Unchanged → Here, overall demand for external legal services contracts, but the fundamental delivery model remains intact.
Demand Down / Economics Transformed → A “disruption” scenario. GenAI reduces demand for traditional services (via insourcing, automation, disintermediation), while also forcing fundamental change in provider economics—staffing, leverage, and pricing models all shift.
Demand Flat / Economics Unchanged → A conservative baseline. GenAI adoption produces efficiency gains that are absorbed within existing structures. Providers continue to operate under familiar economic models, and market demand neither grows nor contracts in a material way.
Demand Flat / Economics Transformed → A stability-plus-disruption mix. Demand for services stays roughly level, but how providers deliver those services changes significantly (e.g., more technology, different staffing, new fee structures). The pie is the same size, but it is baked differently.
Demand Up / Economics Unchanged → An “expansion without disruption” scenario. Demand for legal services grows (perhaps due to regulatory complexity, risk, or new legal domains), but GenAI efficiency gains are captured within traditional delivery models. Economics improve incrementally but not fundamentally.
Demand Up / Economics Transformed → The most expansive view. GenAI both enlarges the market (by creating new categories of work or lowering costs that stimulate demand) and reshapes its economics (new players, new pricing, new workforce models). This is the high-change, high-growth scenario.
We recognize that reality will not be uniform across the industry—different segments, geographies, and practices will move differently. Still, we ask you to select the combination that best reflects your organization’s baseline expectation for the market as a whole.
While the dropdown options are imperative for comparative benchmarking, you are welcome to elaborate in the completely optional commentary box should you feel overly constrained by the structured responses.
3. Impact. What is your perspective on how the broader automation push surrounding GenAI has affected, and will affect, your law department? How might increased automation proportionally, or not, enhance your capacity to meet demand? How does the resulting pressure on you cascade, or not, to your external providers?
Total Demand is the demand placed on the law department to meet the broader organization’s substantive legal needs. In one world, demand is down due to upstream automation (business stakeholders rely on AI rather than the law department). In another world, demand is up due to increased business velocity, ratcheting expectations, and/or legal complexity (legal issues raised by GenAI and the accompanying compliance burden). In between, demand could be flat because GenAI has no impact or because the demand impacts offset.
Total Legal Spend is total fiscal resources allocated to the law department accounting for the interplay of the volume of legal work, the automation of legal work, and organizational perspectives/perceptions/politics.
Internal Legal Spend is total fiscal resources allocated to the law department’s own delivery of legal services to meet the broader organization’s substantive legal needs. It includes spend on technology.
Internal Tech Share is the share of internal legal spend directed toward technology. For our purposes, it is “internal” as long as it hits the law department’s budget—even if the technology is licensed from an external vendor. It is a ratio, not a raw number. The share (the percentage of budget) could increase even if budget decreases, and vice versa.
External Legal Spend is the fiscal resources allocated to external providers to meet the broader organization’s substantive legal needs.
External Tech Share is the share of external legal spend directed by providers toward technology. It is a ratio, not a raw number. The share (the percentage of budget) could increase even if budget decreases, and vice versa. That is, do you think providers will be spending more/less on tech, as a percentage of budget?
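The distinction between a share (ratio) and a raw number can be made concrete with a small sketch. The figures below are hypothetical, not drawn from the survey; they simply illustrate how a tech share can rise even as the underlying budget falls:

```python
# Hypothetical figures (not from the survey) showing how a tech *share*
# can increase even while the underlying budget decreases.
def tech_share(tech_spend: float, total_spend: float) -> float:
    """Return technology spend as a fraction of total legal spend."""
    return tech_spend / total_spend

# Year 1: $10M internal legal budget, $1M on technology -> 10% share
year1 = tech_share(1_000_000, 10_000_000)

# Year 2: budget cut to $8M, tech spend held at $1M -> 12.5% share
year2 = tech_share(1_000_000, 8_000_000)

print(f"Year 1 share: {year1:.1%}")  # 10.0%
print(f"Year 2 share: {year2:.1%}")  # 12.5%
```

Here the raw budget shrank by 20%, yet the tech share grew—the pattern the definitions above ask you to keep in mind when answering.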
Again, the question is about the impact of GenAI, including the attendant expectations and emphasis on automation. Separating out other factors (e.g., demand growth due to other drivers), how does the perceived inflection point around GenAI affect, or not, your law department?
| | To Date | Next 12 Months | Next 36 Months |
|---|---|---|---|
| Total Demand | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Total Legal Spend | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Internal Legal Spend | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Internal Tech Share | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| External Legal Spend | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| External Tech Share | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
Dropdown Options
Heavy Increase = >30%
Modest Increase = 10–30%
Light Increase = <10%
Flat
Light Decrease = <10%
Modest Decrease = 10–30%
Heavy Decrease = >30%
[optional] Commentary. You are welcome to add color, context, clarifications, or caveats to the above:
ANNOTATION: This question explores how your law department perceives GenAI’s commercial and operational impact—how automation and efficiency expectations are beginning to influence both your internal dynamics and your relationships with external providers.
We recognize that no one—not your law department, not your providers, not us—can quantify the full impact of GenAI with precision. The technology, the expectations, and the economics are all evolving faster than anyone’s data can keep up with. That’s exactly why we’re asking for your best directional view rather than perfect data. We’re mapping perception under uncertainty. We’re asking how you perceive and project the curve bending: where GenAI and automation pressures are starting to move the numbers—and where they’re not.
That is, we’re asking you to share your working view of how GenAI and automation pressures are beginning to influence your department’s economics—internally (how you allocate resources) and externally (how you source and manage providers). It’s not about financial reporting accuracy; it’s about trajectory. Are the lines you watch most often—demand, spend, technology share—bending up, trending down, or staying roughly level?
Every law department is already making decisions under uncertainty. Budgets are being revised, pilot projects funded, work reallocated—not because the data are definitive, but because leaders must act anyway. Those decisions are informed by perception, pressure, and professional judgment, not omniscience. This question captures those guiding assumptions, because they reveal how the market is moving before the metrics can.
This question parallels the Provider Survey, which asks firms to assess these same dimensions from their vantage point. By capturing both perspectives, we can compare how law departments think their economics are changing with how providers think client economics are changing. That comparison helps identify where perception diverges from reality—and how misalignment in expectations might be shaping behavior on both sides of the market.
The table format—spanning past, near-term, and medium-term horizons across six dimensions—forces visibility into directional thinking. It lets us see how your internal dynamics are evolving (e.g., internal tech share rising while external spend falls).
This design intentionally balances structure with flexibility: You can anchor responses to observation where data exist (“to date”) and to informed judgment where they don’t (“next 12 months” and “next 36 months”).
We expect you to be uncertain. We designed for that. We’re not looking for precision. We’re measuring perception. Your perception is the data. Even directional estimates help us see where GenAI is beginning to register as a real factor in how law departments plan, budget, and measure their capacity to deliver.
This question is not about getting the answer “right.” It’s about documenting what you currently see, sense, or suspect about GenAI’s economic impact—on your workload, your budgets, and your ecosystem of providers.
While the dropdown options are imperative for comparative benchmarking, you are welcome to elaborate in the completely optional commentary box should you feel overly constrained by the structured responses.
4. Pressure. How would you characterize the budgetary pressure your law department is under (and expects to be under) to deliver GenAI-related economic gains while continuing to meet business needs at scale and pace? How would you characterize the corresponding commercial pressure you’ve applied (and you expect to apply going forward) to your external providers on scaling their delivery of your legal work?
| | To Date | Next 12 Months | Next 36 Months |
|---|---|---|---|
| Internal GenAI-driven pressure your law department is under (expectation that GenAI will enable meeting greater demand with fewer resources, on a relative basis; improved ratio of legal spend to legal outputs/business outcomes) | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| GenAI-related pressure you are applying to your external providers (expectation that GenAI will enable meeting greater demand with fewer resources, on a relative basis; improved ratio of expenditures to legal outputs/business outcomes) | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
[optional] Commentary. You are welcome to add color, context, clarifications, or caveats to the above:
Dropdown Options
Negative Pressure (explicitly bar or discourage GenAI)
No Pressure (GenAI not a factor driving expectations)
Talking Stage (GenAI raised but talk has no real-world impact)
Light Pressure (GenAI has a small impact on internal/external resource allocation decisions)
Moderate Pressure (GenAI is a clear, consistent factor in internal/external resource allocation decisions)
Heavy Pressure (GenAI is a significant driver of internal/external resource allocation decisions)
ANNOTATION: This question traces a simple sequence: pressure on you → pressure from you. Pressure is a leading indicator. It often shows up before budgets, policies, or metrics catch up. Time horizons (to date/next 12 months/next 36 months) let you differentiate the current atmosphere from where you think it’s headed.
First, we ask about internal pressure your department feels to realize GenAI-related gains. Then we ask how that pressure is converted into external pressure you place on providers—through selection, scoping, pricing, and performance expectations.
Here, we’re mapping expectation and transmission, not maturity and outcomes (later). This structure mirrors the Provider Survey’s perspective so we can compare client-reported pressure applied to the pressure providers feel, and spot misalignment.
Again, we know you’re operating under uncertainty and ambiguity. That’s fine. We’re asking for your felt experience—how strongly expectation is shaping decisions now, and where you sense the curve is bending over the next year and three years.
While the dropdown options are imperative for comparative benchmarking, you are welcome to elaborate in the completely optional commentary box should you feel overly constrained by the structured responses.
5. External Spend. Roughly, what share of your external legal spend has been—and may soon be—economically affected by GenAI? For our purposes, “economically affected” means that GenAI (or related automation/expectations) influences how you allocate external budget, which providers receive work, or how matters are priced and structured. Please estimate across the three time horizons below. We recognize these are directional.
Dropdown Options
None = 0%
Minimal = 1–10%
Material = 11–25%
Meaningful = 26–50%
Majority = 51–75%
Most = 76–100%
N/A – cannot approximate with an adequate degree of confidence
| | To Date | Next 12 Months | Next 36 Months |
|---|---|---|---|
| Overall External Spend Affected by GenAI | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
[optional] Commentary. You are welcome to add color, context, clarifications, or caveats to the above:
ANNOTATION: We’re asking for a broad, directional estimate of where GenAI has started to register commercially.
“Affected” doesn’t mean completely transformed. It means GenAI has played a recognizable role—even if partial or indirect—in decisions such as:
- Reallocating work between providers (rewarding those demonstrating credible GenAI capability)
- Insourcing work previously sent out
- Adjusting pricing or scoping because GenAI-enabled delivery has changed cost structures
- Redirecting budget toward new technologies or provider types
You’re not expected to have granular data, especially about the future. Approximate by feel: What portion of your external spend has moved, or plausibly will move, in response to GenAI?
This question deliberately tests where rhetoric meets reality. For more than a decade, corporate clients have spoken passionately about innovation, efficiency, and “value.” They’ve cited technology and alternative fees as catalysts for change. Yet when we look at actual spend patterns, most external allocations remain stubbornly stable. Providers report new expectations but see little measurable redistribution of work. Firms raise rack rates, offer larger nominal discounts, and the cycle repeats. The rhetoric of innovation remains high, but the real economics often move very little. This question asks whether GenAI marks a departure from that history—or more of the same.
That’s the context for this question. It’s not about what we wish would happen. It’s about what is, or will be, actually moving money in meaningful ways. If GenAI truly represents a new inflection point—if budgets, allocations, or fee models are shifting because of it—that signal should start to appear here. If not, that absence is equally telling.
By capturing both client and provider perspectives on this same question, we can see whether this moment is genuinely different or another round of well-intentioned noise.
We’re not asking for perfection, just perception. We recognize these are estimates, under conditions of increasing uncertainty.
How much of your external spend has been touched—budget reallocation, work movement, or pricing recalibration—because GenAI entered the equation? If your honest answer is “None yet,” that’s valuable. If your answer is “It’s starting to appear around the edges,” that’s valuable too. We’re not testing optimism or enthusiasm. We’re measuring traction—how much of the talk is translating into economic movement.
While the dropdown options are imperative for comparative benchmarking, you are welcome to elaborate in the completely optional commentary box should you feel overly constrained by the structured responses.
6. Commercial Levers. For each time period below, please identify the primary commercial lever your law department relies on to realize economic benefits from the use of GenAI in delivering legal services. Select one primary commercial lever per time period. After identifying the primary lever for each period, please rate the actual or expected impact of the remaining levers (i.e., use “PRIMARY” once per column).
Dropdown Options
No Impact (no GenAI-related economic impact)
Limited (relied on occasionally, resulting in modest economic impact)
Significant (occurs at sufficient frequency or magnitude to have noticeable economic impact)
Transformational (reshapes commercial dynamics)
PRIMARY (most impactful commercial lever)
| | To Date | Next 12 Months | Next 36 Months |
|---|---|---|---|
| Fewer Hours Paid For | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Lower Hourly Rates | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| New Fee Structures | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Shifting Work Among External Providers | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| New (to you) Traditional Providers | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| New Provider Types | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Insourcing | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
[optional] Commentary. You are welcome to add color, context, clarifications, or caveats to the above:
ANNOTATION: This question gets specific. The previous questions asked if GenAI is moving money; this one asks how. We’re prompting you to assess the commercial pathways through which GenAI may generate economic benefit for your organization, directly or indirectly, across three time horizons (to date, next 12 months, next 36 months).
We ask you first to identify your primary commercial lever for each of the three time periods covered in the question, and then to assess the actual or expected impact of the remaining levers. By “primary commercial lever,” we mean the lever that, in your judgment, has been (or will be) the most influential driver of your GenAI-related commercial posture during that specific period.
Your primary lever may change over time. One lever may have shaped your commercial posture to date, another may be expected to dominate over the next 12 months, and a different one may become most important over the longer 36-month horizon. Alternatively, you may believe the same lever will remain primary throughout. All of these patterns are plausible—the intent is simply to capture how your priorities evolve as your organization, your providers, and the GenAI landscape mature.
Selecting a primary lever does not suggest the others are unimportant. It distinguishes the lever that sits at the center of your commercial posture during a given period from those that play supporting roles. After identifying the primary lever for each time frame, you are asked to rate the expected impact of the remaining levers to provide a more complete picture of how your overall commercial approach operates.
This structure is designed to surface specificity: which lever leads, which follow, and how that leadership may shift as capabilities and expectations develop. No single pattern is “correct.” The goal is to capture how clients are sequencing and weighting the mechanisms through which GenAI influences the commercial dimensions of legal service delivery.
Each “lever” is a distinct way GenAI could reshape the financial relationship between clients and providers—a tangible mechanism through which GenAI could translate into economic impact. We’re asking how much these forces have mattered so far, and how much you expect them to matter going forward.
This question measures mechanisms, not mood. Clients often describe broad ambitions—efficiency, innovation, “value.” Those are goals, not methods. An important follow-up is therefore: Which levers do you actually use to achieve these goals?
Historically, the legal market has been inundated with rhetoric about change but has experienced relatively few commercial behaviors that truly alter the flow of money. GenAI might be different, or it might not. This question therefore moves beyond aspirations and into mechanics. If GenAI is genuinely altering the system, it will be visible here—in what clients stop buying, start buying, or price differently.
In that sense, we’re asking: “If GenAI is moving the economics of legal work, how is that movement happening?”
If the answer is “It isn’t yet,” that’s perfectly valid—and extremely useful. It helps identify whether GenAI is, at present, more of a conversation than a concrete commercial reality. The time horizon structure (to date, next 12 months, next 36 months) recognizes that transformation happens gradually. We expect uneven answers—maybe “no impact” today but “significant” or “transformational” over three years. That pattern is realistic, not inconsistent.
We also know many departments operate in environments where procurement controls, risk policies, or leadership bandwidth constrain your ability to act on these levers. We’re not asking you to overstate activity. We’re asking you to capture your true operating environment: what’s real, what’s possible, and what’s still aspirational.
The framework mirrors the Provider Survey, where providers are asked to identify which levers clients are actually pulling. By comparing client and provider views, we’ll see where perception gaps lie.
While the dropdown options are imperative for comparative benchmarking, you are welcome to elaborate in the completely optional commentary box should you feel overly constrained by the structured responses.
7. Better | Faster | Cheaper. When you think about the commercial upside you expect from GenAI over the next 36 months, how would you roughly weight the importance of each of the following?
Better: improved quality, risk management, and business outcomes
Faster: shorter cycle times, quicker turnaround, and higher throughput
Cheaper: lower total legal spend for comparable outcomes
Please allocate 10 points across the three categories to reflect their relative importance to your organization. Use whole numbers only (0–10). You may assign the same number of points to more than one outcome. Your three entries must add up to 10.
| | Weight |
|---|---|
| Better | [DROPDOWN][DD] |
| Faster | [DROPDOWN][DD] |
| Cheaper | [DROPDOWN][DD] |
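The allocation rule stated above (whole numbers 0–10, summing to exactly 10) can be expressed as a tiny validity check. The function name and structure are ours, for illustration only:

```python
# Illustrative check (names are ours, not the survey's) that a
# Better/Faster/Cheaper allocation follows the stated rules:
# whole numbers from 0 to 10, summing to exactly 10.
def valid_allocation(better: int, faster: int, cheaper: int) -> bool:
    points = (better, faster, cheaper)
    in_range = all(isinstance(p, int) and 0 <= p <= 10 for p in points)
    return in_range and sum(points) == 10

print(valid_allocation(5, 3, 2))  # True: whole numbers summing to 10
print(valid_allocation(4, 4, 4))  # False: sums to 12
```

Note that ties are allowed (e.g., 4/3/3 or 5/5/0); only the range and the total are constrained.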
[optional] Commentary. You are welcome to add color, context, clarifications, or caveats to the above:
ANNOTATION: This item is a weighting exercise. It is not a referendum on whether “better,” “faster,” and “cheaper” outcomes are desirable. In most conversations, when clients are asked which of the three they want, the practical answer is “Yes—we want all of them.” That response is completely reasonable, but it is not sufficiently specific to create alignment or illuminate where GenAI is expected to make the most meaningful commercial contribution.
Here, we are asking you to express relative importance. Allocating 10 points across the three categories simply reveals the shape of your priorities rather than suggesting any category is unimportant. In other words, this is not about whether good outcomes are good—it is about how you weight the commercial upside you expect GenAI to deliver over the next 36 months.
This clarity matters. Too often, the conversation collapses ideas into a single bundle. Providers frequently report that clients articulate all three dimensions, but they differ in their perceptions of which dimension ultimately drives behavior. By asking you to distribute points, we are able to surface specificity, compare differences across clients, and better understand the expectations gap—if any—between what clients say they prioritize and what providers believe clients prioritize.
No single allocation is “right.” The purpose of the question is simply to support a more coherent view of how the ecosystem is thinking about GenAI’s commercial potential and to create a shared language for discussing priorities with greater nuance.
While the dropdown options are imperative for comparative benchmarking, you are welcome to elaborate in the completely optional commentary box should you feel overly constrained by the structured responses.
8. Expectations. How would you characterize the expectations you have communicated to your providers about their use of GenAI in performing your legal work, and how broadly and clearly have you communicated those expectations?
[Expectations] is the closest characterization of our externally expressed expectations on provider GenAI use. We have communicated our GenAI expectations to [Breadth] of our providers. The providers to whom we have communicated should be [Clarity] on our expectations of them regarding integrating GenAI into performing our legal work.
Expectation Options
Pending (still formulating our position on provider GenAI use)
Prohibitive (explicitly bar provider GenAI use in our legal work)
Discouraging (permit but have reservations or urge minimal use)
Neutral (leave GenAI decisions to our providers)
Encouraging (actively support, or even require, use of GenAI)
Nuanced (permit use but with conditions and context-specific rules)
Inconsistent (different messages through different channels)
Breadth Options
None = 0%
Few = 1–10%
Some = 11–25%
Many = 26–50%
Majority = 51–75%
Most = 76–100%
N/A – cannot approximate with any adequate degree of confidence
Clarity Options
Clear (our providers should understand our posture and attendant expectations)
Unclear (we, admittedly, have been vague and/or inconsistent)
[optional] Commentary. You are welcome to add color, context, clarifications, or caveats to the above:
ANNOTATION: This question shifts from what you’re doing to what you’re saying—the expectations you have communicated to your providers about their use of GenAI when performing your work, how widely those expectations have been shared, and how clearly they’ve been conveyed.
We know that many law departments are still formalizing their position on provider GenAI use. Some are actively encouraging experimentation; others are imposing restrictions or conditions. Some are somewhere in between—improvising as they go.
That fluidity is normal, but it has real consequences. The industry is currently operating in an environment where providers fear being too bold and clients assume they’ve already been clear, yet both sides routinely misread one another.
We’re asking you to characterize three related aspects of your posture toward provider use of GenAI:
- Expectation: What’s your actual stance? Are you supportive, restrictive, neutral, or still figuring it out?
- Breadth: How widely have those expectations been communicated across your provider base?
- Clarity: How confident are you that your providers actually understand what you mean?
This question helps measure how far you’ve gone from having an internal opinion to expressing an external position. It also gives context to later questions on responsibility and cost of use. If expectations haven’t been clearly communicated, it becomes difficult to assign accountability in downstream commercial conversations.
Experience shows that many law departments assume their providers “should know” their position on GenAI use because they’ve referenced it in an RFP, a security questionnaire, or a passing email. Providers, meanwhile, often claim to have received contradictory guidance from different points of contact inside the same client organization. By comparing this data with the mirror question in the Provider Survey, we’ll see whether clients believe they’ve sent a message providers claim not to have received—or vice versa. That gap is one of the most immediately actionable findings in the entire study.
We also recognize that some departments have intentionally chosen strategic ambiguity—keeping their position flexible while the technology and risk standards mature. If that’s true for you, say so. That’s a legitimate strategic choice, not a failure of governance.
Our goal is to benchmark how far the market has moved from speculation to communication—from thinking privately about GenAI to speaking publicly about it. Your answers will reveal whether the current state of “alignment” between clients and providers is real or assumed, clear or confused.
While the dropdown options are imperative for comparative benchmarking, you are welcome to elaborate in the completely optional commentary box should you feel overly constrained by the structured responses.
9. Responsibility. In discussions about GenAI, clients often express frustration that providers are not moving fast enough: “They should anticipate our needs, bring us ideas, and deliver more value.” Providers, in turn, point out that they are commercial enterprises; they can only invest and innovate to the extent that clients reward and enable those behaviors.
Focusing on your external legal spend over the next 36 months, how should responsibility for clients realizing the economic upside from GenAI integration be distributed between (i) providers proactively improving delivery, pricing, or efficiency in ways that economically benefit you and (ii) clients explicitly defining expectations, applying commercial pressure, or reallocating work to achieve those benefits?
Then, in practice, how do you believe responsibility will actually be distributed?
Dropdown Options
0% Providers / 100% Clients
20% Providers / 80% Clients
40% Providers / 60% Clients
50% Providers / 50% Clients
60% Providers / 40% Clients
80% Providers / 20% Clients
100% Providers / 0% Clients
| How Responsibility Should Be Allocated | [DROPDOWN][DD] |
| How Responsibility Will Likely Be Allocated | [DROPDOWN][DD] |
[optional] Commentary. You are welcome to add color, context, clarifications, or caveats to the above:
ANNOTATION: This question explores accountability for translating GenAI’s potential into measurable, commercial outcomes benefiting clients—whether clients or providers should bear more responsibility for driving change, and how that dynamic is actually playing out.
From the client perspective, it may often seem self-evident that providers should lead. They are, after all, the experts in delivery—the ones closest to the work, the processes, the data, and the technologies that can make their work more efficient or more valuable. Clients can articulate outcomes—faster cycle times, more predictable pricing, higher quality—but it’s the providers that are best positioned to determine how to get there. Expecting clients to prescribe innovation in legal delivery is inconsistent with the professional division of labor that underpins the entire market.
Yet the provider’s position isn’t automatically wrong. Providers are, fundamentally, commercial enterprises. They must invest prudently and ensure those investments generate sustainable returns. When they deploy GenAI—through new licenses, infrastructure, training, or governance—they take on real cost and operational risk. And historically, the market hasn’t rewarded that risk. Clients often say they want innovation and value, but most still buy in ways that reward volume over efficiency and inputs over outcomes. The result is a misalignment where providers that modernize their delivery models can inadvertently erode their own margins.
From this vantage point, GenAI sits at the intersection of expectation and incentive. Clients expect providers to innovate, but many providers question whether the market truly compensates them for doing so. This question therefore invites a candid reflection:
- “Should” expresses your normative view of how responsibility ought to be shared if both sides behaved optimally—clients setting clear expectations and rewarding innovation, providers leading in how it gets done.
- “Will” expresses your practical view—what you think will actually happen given current behaviors, procurement norms, and inertia.
The difference between those two perspectives—the should/will gap—is revealing. A wide gap implies expectation without enablement: Clients believe providers should lead but recognize that the commercial system doesn’t yet reward leadership. A narrow gap suggests better alignment—where expectations are coupled with structures (budgets, sourcing criteria, pricing models) that make innovation a rational, rather than aspirational, behavior.
This framing mirrors the Provider Survey, where we ask providers to assess the same accountability split from their side. Comparing responses will illuminate whether each side believes the other is responsible for moving first—a coordination failure that helps explain why genuine transformation has been slower than the hype suggests.
While the dropdown options are imperative for comparative benchmarking, you are welcome to elaborate in the completely optional commentary box should you feel overly constrained by the structured responses.
10. Cost of Use. Providers are investing in GenAI systems, licenses, and governance as well as the associated specialized personnel. These investments create internal costs that may or may not appear as distinct charges on client invoices. Do you believe all their GenAI-related costs should be absorbed by your providers as overhead (i.e., implicitly bundled into your fees)? Or are you open to some explicit cost recovery in specific circumstances?
If your organization already has a formal position, great (it should fit in the commentary box). But this is primarily a gut check since you likely don’t know what your providers are spending on GenAI, let alone how that spend is allocated or how it might be attributable to your matters. At this early stage, we’re after feelings, not analysis. This is a ballpark exercise intended to surface your subjective perspective on what percentage of provider GenAI-related costs should be absorbed as overhead by them versus what percentage might reasonably be charged back to you.
Absorbed by Providers as Overhead/Explicitly Charged Through to Clients
0% Providers / 100% Clients
20% Providers / 80% Clients
40% Providers / 60% Clients
50% Providers / 50% Clients
60% Providers / 40% Clients
80% Providers / 20% Clients
100% Providers / 0% Clients
| Distribution of Provider GenAI Costs | [DROPDOWN][DD] |
[optional] Commentary. You are welcome to add color, context, clarifications, or caveats to the above:
ANNOTATION: This question asks how you believe your external providers’ GenAI costs should be treated within your commercial relationships.
Providers are making real investments in GenAI—licenses, infrastructure, integrations, personnel, training—that will be recovered somehow. The question isn’t whether clients ultimately pay for them; clients do, because all costs are covered by revenue, and all revenue comes from clients (this is true for any business). The question is how those costs appear—as distinct charges or as part of providers’ overall pricing.
Both positions are defensible. Some clients prefer simplicity and price stability over granular accounting. Others prioritize visibility even if it adds complexity. There is no single right answer, and the market hasn’t yet settled on one. What’s most useful here is your instinct: how you currently think GenAI costs should be treated, given your commercial philosophy and tolerance for complexity.
We recognize that you won’t have a comprehensive financial analysis or a controlled experiment comparing outcomes. You’re not expected to. Your perspective likely comes from fragmented information—conversations, proposals, invoices, and internal discussions. Still, your organization must make real decisions amid uncertainty: setting budgets, negotiating fees, and defining policies. This question captures your working assumption about what feels fair, sustainable, and administratively sensible, even if that view is largely intuitive or inherited from precedent.
Think of the response options as marking a continuum of treatment:
- Fully Absorbed as Overhead → Keep things simple; let providers manage GenAI within existing fees.
- Conditional or Shared Recovery → Allow itemization or structured recovery only when demonstrably incremental and tied to your work.
- Explicit Cost Recovery → Prefer the transparency of separate, explainable GenAI charges, even if it adds complexity.
While the dropdown options are imperative for comparative benchmarking, you are welcome to elaborate in the completely optional commentary box should you feel overly constrained by the structured responses.
11. Use Case Policies. What is your organization’s formal, communicated policy on your external providers incorporating GenAI to perform your legal work for the following use cases?
Dropdown Options
No Policy (no formal policy currently communicated)
Prohibit (usage not permitted)
Consent Required (usage prohibited absent express consent)
Notice Required (usage permitted with notice)
Permit (usage permitted)
Nuanced (policy communicated but varies at a granular level not properly captured by available options)
| | Policy |
|---|---|
| Public Tools, Non-Confidential Work (Public tools = Gemini, ChatGPT, Claude, Perplexity, etc.) | [DROPDOWN][DD] |
| Public Tools, Confidential Work | [DROPDOWN][DD] |
| Private Tools, Non-Confidential Work | [DROPDOWN][DD] |
| Private Tools, Confidential Work | [DROPDOWN][DD] |
| Use Our Confidential Info to Train Their Models Only for Our Work | [DROPDOWN][DD] |
| Use Our Confidential Info to Train Their Models for Their General Work | [DROPDOWN][DD] |
| Use GenAI for Legal Research | [DROPDOWN][DD] |
[optional] Commentary. You are welcome to add color, context, clarifications, or caveats to the above:
ANNOTATION: This question explores how far your organization has gone in formally defining and communicating policies governing provider use of GenAI in delivering your legal work.
The goal is not to test your governance framework. It’s to understand where your lines are between comfort, caution, and control, and how consistently those lines are being drawn across use cases.
We recognize that most law departments are still evolving their positions. Some are moving quickly to codify rules. Others are intentionally waiting for the technology and regulatory environment to stabilize. Both approaches are defensible. What matters here is visibility: what has been decided, what hasn’t, and where nuance or ambiguity still dominates.
This question gauges the current state of risk governance—not to critique, but to benchmark. In practice, policies often lag far behind technology. This lack of guidance or a clear permission structure is where much of the latent friction between clients and providers originates. Documenting where the actual guardrails are—what’s permitted, what’s prohibited, and what’s still undefined—is essential for moving forward.
How clients draw these lines will determine how quickly and confidently providers invest in GenAI-powered delivery. If the prevailing message from clients is “not yet” or “only with consent,” the market will move slower, even if everyone privately believes GenAI is inevitable.
If, on the other hand, clients begin differentiating between risk contexts—for example, permitting private, governed tools while banning public ones—providers will have more clarity about how to scale responsibly.
The structure of this question—multiple discrete use cases, each with a simple policy dropdown—is designed for comparability and granularity. We know most organizations don’t have a single, universal policy statement. Instead, they have a patchwork of evolving positions: clear in some contexts, silent in others, and still under debate in a few. This table allows you to reflect that reality rather than forcing a single, oversimplified answer.
We also know that policy maturity doesn’t necessarily equal sophistication, and the absence of a policy doesn’t signal neglect. A “No Policy” response isn’t a failure; it’s a snapshot. Many departments are deliberately waiting to align with enterprise-wide AI governance, or to learn from early adopters before codifying their own stance. Conversely, a “Prohibit” response might be a prudent placeholder rather than a permanent posture.
The purpose of this question is to map where the industry really stands, not where it claims to be. By comparing client responses with provider-reported policies, we can see whether the two sides’ perceptions of what’s “allowed” match—or whether we’re still operating in a world of polite fiction.
We’re not asking whether your GenAI policies are perfect. We’re asking whether they exist, how far they reach, and how they differ across real-world scenarios. Even if your current answer is “We’re still figuring it out,” that’s valuable. If you’ve communicated policies but you suspect they’re misunderstood or inconsistently applied, that’s valuable too.
While the dropdown options are imperative for comparative benchmarking, you are welcome to elaborate in the completely optional commentary box should you feel overly constrained by the structured responses.
12. Maturity. How mature is your law department’s use of GenAI to deliver legal services internally today? How mature do you expect to be after the next 12 months and 36 months? What are your standards for external provider maturity? What is your impressionistic sense of whether those standards are being, and will be, met by your existing providers?
The first part of the question asks you to rate your own current and projected GenAI maturity levels. The second part asks about the maturity level you expect from your providers, now and in the future, as well as your perception of what percentage of your existing provider network does, and will, meet your expectations.
Understandably, you don’t know where most of your providers are with respect to GenAI maturity (which is among the reasons for this survey initiative). This question only seeks to surface your subjective perception of where your network stands and is headed. That is, lacking sufficient information, are you positive/hopeful or negative/doubtful?
Maturity Dropdown Options
Dormant (no formal activity; limited to individual curiosity or tinkering without organizational recognition)
Exploratory (isolated pilots or proofs of concept within a department or practice group; learning phase with no consistent framework)
Emerging (early repeatable use cases appear; governance conversations begin; some processes adjusted to integrate GenAI)
Operational (GenAI tools incorporated into regular workflows across multiple teams; usage is guided by policies and supported infrastructure)
Scaled (organization-wide adoption with high usage across many areas; systematic governance, budget allocation, and measurable business impact)
Share Options
None = 0%
Few = 1–10%
Some = 11–25%
Many = 26–50%
Majority = 51–75%
Most = 76–100%
| | To Date | After 12 Months | After 36 Months |
|---|---|---|---|
| Your Law Department’s GenAI Maturity | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Expected External Provider GenAI Maturity | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Share of Existing Providers Meeting Maturity Expectations | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
[optional] Commentary. You are welcome to add color, context, clarifications, or caveats to the above:
ANNOTATION: This question gauges how far GenAI has progressed toward becoming actual infrastructure—how mature your law department’s use of GenAI has become, how mature you expect it to be, and what maturity you expect (and observe) among your providers.
It distinguishes between three related but distinct perspectives:
- Your department’s internal maturity
- The maturity you expect from your external providers
- The share of your current providers meeting those expectations
Together, these show not just your own progress, but your standards—and how closely your provider network aligns with them.
We’re asking you to characterize institutional maturity—the degree to which GenAI is embedded, normalized, and governed within your law department and your external ecosystem.
Each dropdown corresponds to a level of integration, moving from “Dormant” (no formal activity) through “Scaled” (organization-wide adoption with measurable business impact).
Most law departments have moved beyond curiosity about GenAI, but relatively few have yet built durable systems—policies, budgets, governance, and measurement—to make it operational. This question clarifies how far that transition has progressed.
Historically, the industry has seen long lag times between experimentation and institutionalization. We’ve watched “innovation” live for years in pilot purgatory: a handful of experiments that never reached scale, never made it into policy, and never got funding beyond the initial hype cycle. By tracking maturity explicitly, we can tell whether GenAI is following that same pattern or breaking it.
The time horizons (to date, next 12 months, next 36 months) help distinguish current state from direction of travel.
Maturity doesn’t change overnight. By capturing all three points, we can see whether law departments expect incremental improvement (e.g., Emerging → Operational) or whether they anticipate step-change transformation (e.g., Exploratory → Scaled).
This also gives us a way to compare client and provider evolution. Providers are being asked to rate their own maturity using the same scale. The cross-view will show whether clients’ expectations of provider readiness are realistic, aspirational, or divorced from how providers see themselves.
We recognize the provider portion will be impressionistic. We know you don’t actually know where your providers stand on GenAI maturity—not with precision, not at scale. You, at best, have fragments: what they say in meetings, how they describe capabilities in RFPs, the tone of their marketing, the sophistication of their questions. Those signals are limited and imperfect, but they still shape your assumptions and decisions.
You make sourcing and budgeting calls based on that impressionistic picture—because you have to. Having perfect visibility would require boots on the ground constantly, inside every provider’s organization. No one has that.
This initiative exists, in part, to close that gap—to give clients and providers a shared, more transparent view of where each side actually is, even if that view remains necessarily incomplete. We’re not asking for verification. We’re asking for your operating perception—the working theory you use to make real decisions when equipped only with partial information. Your views may be approximations, but they are still consequential in defining how the market moves.
Maturity is a measure of institutional absorption. It reflects when GenAI stops being a topic and becomes a capability—budgeted, governed, and expected. Every wave of legal technology has passed through this same cycle: early excitement, local pilots, incremental normalization, then uneven scaling. By locating where your law department and providers sit on that curve, we can understand where the industry truly stands, not just where it says it stands.
We also know that maturity is not destiny. Some organizations advance quickly but stall at “Operational.” Others stay at “Emerging” longer but build a deeper foundation for sustainable adoption. This question doesn’t reward speed; it rewards honesty.
If GenAI maturity remains embryonic today but you expect rapid progression over the next 36 months, say so. If you expect it to plateau—because budgets, regulation, or fatigue will slow momentum—that’s equally important. The maturity curve tells us whether the hype cycle is turning into real capacity or receding into cautious pragmatism.
We’re not grading your department or your providers. We’re benchmarking the state of institutionalization.
While the dropdown options are imperative for comparative benchmarking, you are welcome to elaborate in the completely optional commentary box should you feel overly constrained by the structured responses.
13. Obstacles. Please identify one primary obstacle your organization faces in adopting or scaling GenAI and then rate the seriousness of the remaining obstacles.
Dropdown Options
Not An Obstacle
Minor Obstacle
Material Obstacle
Major Obstacle
PRIMARY Obstacle
| GenAI Not There Yet | [DROPDOWN][DD] |
| Search-to-Implementation Speed & Costs | [DROPDOWN][DD] |
| Unclear ROI/Business Case & Attendant Budget Constraints | [DROPDOWN][DD] |
| Change Management & Cultural Resistance | [DROPDOWN][DD] |
| Talent & Skills Gap | [DROPDOWN][DD] |
| Data Readiness & Quality | [DROPDOWN][DD] |
| Legacy Systems & Integration Difficulties | [DROPDOWN][DD] |
| Information Security, Privacy, & Confidentiality | [DROPDOWN][DD] |
| Regulatory Volatility & Professional Responsibility Ambiguity | [DROPDOWN][DD] |
[optional] Commentary. You are welcome to add color, context, clarifications, or caveats to the above:
ANNOTATION: This question focuses on friction. After identifying what’s expected, we now ask what’s preventing progress—what most constrains your ability to adopt or scale GenAI in legal service delivery.
You are asked to identify a single “primary obstacle”—the most immediate, structural barrier your organization faces—and then to rate how serious the other listed challenges are. We recognize that every organization will face some combination of these issues, but the exercise of naming one as primary helps clarify what’s truly gating acceleration, not just complicating it.
We also recognize the irony: You’re unlikely to have a fully objective view of your own obstacles, much less their relative weight. Many of these factors overlap: Unclear ROI may be the surface expression of cultural resistance—i.e., the cultural resistance manifests as complaints about the lack of a clear business case. The categories are not meant to be neat or mutually exclusive; they’re designed to capture recurring patterns of friction that shape GenAI progress across the ecosystem.
GenAI Not There Yet. For some, the primary barrier is the technology itself—its current limits in accuracy or reliability within legal contexts. Even promising tools often fail to meet the profession’s threshold for confidence and defensibility. This can breed caution: the sense that GenAI is “almost but not yet ready” for high-stakes use. The resulting skepticism is rational, not regressive; it reflects the uniquely high reliability standards of legal work.
Search-to-Implementation Speed and Costs. Identifying, testing, and integrating tools takes time and money. Pilots are expensive, success rates are uneven, and lessons learned are hard to scale. Many organizations struggle to move from experimentation to operationalization because discovery cycles consume disproportionate bandwidth. For some, the obstacle is not so much resistance as it is simple capacity constraint: too many variables, too few hands.
Unclear ROI/Business Case and Budget Constraints. Even when enthusiasm is high, progress stalls without a credible economic case. Many leaders hesitate to fund initiatives that can’t demonstrate clear returns or client impact. The result: Projects pause not for lack of belief but for lack of a budget-backed rationale.
Change Management and Cultural Resistance. Organizations don’t adopt technology; people do. Time pressure, entrenched habits, and fear of obsolescence can all slow adoption. Lawyers in particular are conditioned to limit downside risks, prizing precedent and precision over experimentation. “Change management” in this context is about alignment, not attitude: creating space, safety, and incentive for behavior change at scale.
Talent and Skills Gap. There’s a shortage of professionals who can bridge the gap between technology and practice. Knowing what GenAI can do is not the same as knowing how to make it work inside a matter, a team, or a workflow. The challenge is not just hiring technical talent—it’s cultivating applied literacy among lawyers, operations, and technologists alike.
Data Readiness and Quality. GenAI depends on what it can access and trust. In most legal organizations, knowledge is trapped across systems, formats, and minds. Even the best models underperform when data is fragmented, inconsistent, or poorly tagged. This obstacle is foundational: Until knowledge is structured and findable, GenAI remains limited to shallow use cases.
Legacy Systems and Integration Difficulties. Most organizations operate within complex, interdependent IT ecosystems. Introducing GenAI often means confronting technical debt. Integration hurdles make innovation feel like surgery: invasive, slow, and risky.
Information Security, Privacy, and Confidentiality. Legal work involves sensitive, privileged, and regulated information. Many organizations are constrained not by unwillingness but by duty. Even if tools are secure in principle, risk tolerance varies widely. Until model training, data isolation, and auditability mature, this remains a universal gating concern.
Regulatory Volatility and Professional Responsibility Ambiguity. Rules are evolving faster than clarity. Bar guidance, data residency laws, and AI governance frameworks differ by jurisdiction and change frequently. Providers are caught between undercompliance risk and overcompliance paralysis. The absence of consistent standards breeds confusion about what is “safe enough” and consternation about how quickly that can change.
We recognize that these obstacles overlap. They’re not meant to be silos; they’re interdependent symptoms of a market still forming its norms. You’re not expected to have perfect clarity on what’s blocking progress, but you’re making decisions based on your best interpretation—and that’s what we’re capturing here.
Finally, if your main challenge isn’t reflected here—or if the interplay among them is more complicated—use the optional commentary box to elaborate, clarify, or add missing context.
Optional Insights Section
Everything that follows is optional. The goal is to be selective—share where you have perspective, insight, or lived experience that others can benefit from. If a question sparks a strong answer, we’d love to hear it. If not, skip it.
These questions go beyond benchmarking. We use the inputs here to sharpen our analysis, inform L.E.G.A.L.-related programming (including events), and surface themes worth discussing across the ecosystem. By default, any use of these inputs is non-attributed: We reference them only in paraphrased, composite form designed to prevent identification of any organization or individual.
Your responses may also open the door to follow-up opportunities (e.g., opt-in case studies, working sessions, or speaking invitations). No one will be quoted or attributed—named or unnamed, verbatim or paraphrased—without express, written permission through a separate consent process.
Our objective is to gather useful “color” while limiting burden. A few high-signal observations beat a long list of hypotheticals. The survey is persistent: You may have nothing to add today, and that’s fine—you can always return later.
14. Doing | Planning | Thinking. Describe how GenAI is changing, or could soon change, the economics of legal service delivery from your law department’s perspective—whether by reducing costs, shifting work, keeping work in-house, or changing how providers are paid.
Doing. Which GenAI initiatives have already changed the economics of legal service delivery for your organization, and how have the economics changed (e.g., savings, more for your money, fee model shifts)?
Planning. Over the next 12 months, which resourced/in-flight GenAI initiatives do you expect to most change the economics of legal service delivery for your organization, and how do you expect the economics to change (e.g., savings, more for your money, fee model shifts)?
Thinking. Over the next 36 months, which potential GenAI initiatives do you anticipate could most change the economics of legal service delivery for your organization, and how would you expect the economics to change (e.g., savings, more for your money, fee model shifts)?
ANNOTATION: We’re asking how GenAI is beginning—now or soon—to show up in your financial reality: what you spend, how you spend it, and with whom. This question mirrors the Provider Survey, but from your perspective as a client. It explores where GenAI is beginning to change the economics of legal service delivery: how much you spend, how work is distributed, and how you and your providers capture value.
We know most law departments are still early in this process. That’s to be expected. Even directional or emerging examples help us understand where GenAI is making a real-world difference—and where it’s still potential rather than practice.
Here’s how to think about it:
- Doing: Where GenAI has already delivered economic results: cost savings, faster cycle times that free budget, or new fee models that reward efficiency
- Planning: Where you’ve funded or committed to GenAI initiatives expected to affect spend, sourcing, or structure within the next year
- Thinking: Where you see longer-term opportunities for GenAI to reshape the commercial landscape over the next three years—whether through automation, analytics, or new delivery models
You don’t need to quantify everything. We’re looking for directional indicators of change—cases where GenAI has become a factor in your budgeting, sourcing, or provider decisions. If you haven’t yet seen commercial impact, that’s still valuable insight; it helps us calibrate where clients collectively are on the adoption curve.
15. Provider Positive Example(s). Please share an anonymized example of an external provider adapting effectively to GenAI—integrating GenAI into legal service delivery in a manner that factored into your commercial decisions (e.g., work allocation, fee models) or delivered demonstrable benefits to your organization (e.g., measurable savings, tangible value). One example is sufficient; if others come to mind, you’re welcome to include them.
ANNOTATION: This question invites you to identify an anonymized example of a provider (law firm, ALSP, or other external partner) that has adapted effectively to GenAI in a way that mattered to you commercially.
We’re looking for evidence of meaningful impact: where GenAI capability, applied intelligently, influenced how you allocated work, structured fees, measured performance, or captured value.
One strong example is enough.
We’re seeking real-world instances where providers’ use of GenAI has changed clients’ perception of their value proposition—not theoretically, but in the actual economics or quality of delivery. This could include:
- Efficiency or speed: Delivering work faster without loss of quality
- Innovation in process: Redesigning workflows or integrating GenAI tools to remove friction or duplication
- Pricing impact: Offering new or improved fee arrangements underpinned by GenAI-enabled productivity gains
- Insight or collaboration: Bringing data-backed ideas that changed how you approached scoping, matter management, or risk
- Governance and transparency: Proactively managing GenAI risk in ways that built your confidence rather than testing it
You can describe the example at any level of detail you’re comfortable with—matter-specific, relationship-level, or generalized—but please anonymize where necessary.
For years, much of the conversation around “innovation” in legal services has been theoretical. Providers have claimed to innovate; clients have claimed to reward it; and yet when pressed, both sides struggle to point to specific, replicable examples where innovation actually altered the commercial reality of the relationship.
This question aims to change that dynamic—to replace anecdotes about intent with evidence of outcomes.
By collecting and anonymizing examples of where providers are genuinely making GenAI work, we can show that progress is happening, even if incrementally or unevenly.
That evidence helps in three ways:
- It separates signal from noise. We learn what kinds of GenAI-enabled efforts actually create client value, rather than just headlines.
- It gives credit where it’s due. Providers that are genuinely advancing capability should see that reflected in market understanding.
- It helps clients learn from each other. If a certain approach has worked well elsewhere, that pattern can inform your own strategy—without naming names or breaching confidentiality.
We’re not looking for perfection. We’re looking for proof that “better” is possible—that at least some providers are converting promise into practice. Even short, anonymized examples (e.g., “A global firm applied GenAI to first-draft NDAs, cutting cycle time by half and eliminating low-value review”) help illuminate what practical success looks like—something the entire market needs more of. Every example we collect helps shift the conversation from skepticism to substance, from “who’s talking about GenAI” to “who’s making it work.”
16. Provider Negative Example(s). Please share an anonymized example of an external provider NOT adapting effectively to GenAI—e.g., missing key implications for legal service delivery or business models. Please indicate how that shortfall affected your commercial decisions (e.g., work reallocation, loss of opportunity, change in fee structure). One example is sufficient; if others come to mind, you’re welcome to include them.
ANNOTATION: This question seeks a concrete example of where resistance or misjudgment around GenAI cost a provider. Where the previous question highlighted what’s working, this one asks: Where did a provider misread the moment?
We’re seeking real-world instances where a provider’s approach to GenAI—its posture, pace, or communication—proved too cautious, too performative, or too detached from commercial reality.
Examples might include:
- Defensive inertia: A provider insisting it needs “client demand” before investing, effectively asking to be sold on GenAI by its clients
- Policy paralysis: Excessive internal debate used as a shield for inaction
- Superficial signaling: Press releases and “AI task forces” with no visible change in delivery or pricing
- Governance overreaction: Blanket bans on GenAI use that cripple efficiency without enhancing safety
- Misreading the client signal: Treating client curiosity as an RFP checkbox rather than a market warning
For decades, clients have urged innovation yet moved little work. Providers learned a rational lesson: talk is cheap. That history explains their caution—but it also traps them.
An open question is whether GenAI changes the calculus—whether the cost of waiting now exceeds the comfort of caution, and whether inaction is beginning to carry visible penalties.
Across anonymized responses, we’ll be able to see whether defensive inertia remains a safe commercial bet or is finally starting to erode share and standing.
We phrase the question parallel to the “positive example” because both sides of the ledger matter. Positive examples show where GenAI is creating advantage. Negative examples show where failing to adapt is creating loss. Both are indicators of market change.
We know you may hesitate to criticize. That’s why this section is optional and anonymized. We’re not looking for blame; we’re looking for signals—instances where old reflexes met new reality and lost. Even brief accounts help reveal whether this time is different—whether inertia finally has a price.
The purpose isn’t to embarrass but to document a turning point: the moment when GenAI stopped being theoretical and started reshaping who wins work. Your candid example—anonymized, aggregated, and contextualized—helps show whether resistance remains viable or whether the market has begun rewarding readiness and penalizing passivity.
17. Commercial Pressure/Lever Example(s). Please share an example of your law department impactfully applying commercial pressure or utilizing commercial levers (e.g., reallocation of work/budget) related to the integration of GenAI into legal service delivery. One example is sufficient; if others come to mind, you’re welcome to include them.
ANNOTATION: For decades, law departments have been vocal about innovation and value yet relatively quiet when it comes to using commercial muscle to reward either. This question probes whether GenAI has started to change that—whether rhetoric has begun to translate into resource movement—by asking for a concrete, anonymized example of how your law department has translated GenAI-related expectations into commercial behavior. The intent is to understand what “acting on GenAI” looks like in practice.
One example is enough. A short, specific story beats a long, abstract answer.
We’re seeking instances where law departments used commercial mechanisms—budgets, scoping, sourcing, pricing, or performance management—to express GenAI expectations in a way that mattered economically. That could mean:
- Rewarding capability: Steering work or wallet share toward a provider that demonstrated credible GenAI integration
- Penalizing inertia: Withholding or reallocating work from providers showing no meaningful progress
- Reframing value: Negotiating new fee structures, “more-for-the-same” commitments, or efficiency guarantees tied to GenAI-enabled delivery
You don’t need to quantify the impact precisely—directional or qualitative descriptions (“We moved X type of work to a provider that was demonstrably further ahead”) remain valuable.
In simple terms, we’re asking: Have you actually pulled any levers, or is GenAI still mostly conversation?
Historically, provider economics have been protected by client inconsistency. Even when clients complained about rates or demanded efficiency, most continued to buy the same way—rewarding inputs, not outcomes. Firms learned to wait out the noise.
GenAI might finally disrupt that pattern, or it might not.
This question helps test whether that shift has begun—whether GenAI is becoming a real factor in procurement, pricing, and portfolio management or remains largely confined to policy talk and curiosity pilots.
We want stories of pressure that produced motion—where conversation gave way to consequence. We’re not asking for public case studies or press releases. We’re asking for the internal version: the actual moment of decision when you acted differently because GenAI entered the equation. It could be as modest as “We gave additional scope to a provider that showed credible GenAI governance” or as direct as “We cut a panel firm for refusing to engage.” Both are valuable—they show where the market’s invisible hand is starting to push.
Pressure only matters if it moves something. This question identifies where clients have stopped simply saying GenAI matters and started making it matter—commercially, not rhetorically. Historically, providers could afford to wait out client enthusiasm, knowing the next budget cycle or leadership change would blunt momentum. The examples you share will help show whether that equilibrium is shifting—whether silence, inertia, or defensiveness now carries commercial cost.
18. Provider Misconception & Perspective Shift. What is the most damaging misconception providers commonly harbor about the use of GenAI in legal service delivery? From your perspective, if more providers could understand one thing better about working with their clients on GenAI integration, what would make the biggest difference in furthering both sides’ best interests?
ANNOTATION: This question asks you to identify the most damaging misconception providers hold about GenAI—and what single change in perspective would most improve collaboration, alignment, and results between you and them.
We know that “misconception” can sound adversarial. That’s not the intent. The goal is to illuminate where even well-meaning providers are seeing the problem through the wrong lens—where their assumptions about GenAI, risk, or client priorities diverge from what clients actually value.
We’re asking you to describe, in your own words, what providers consistently get wrong about GenAI—not at the technical level, but at the strategic and relational level. Your answer helps pinpoint the mental and cultural obstacles that still hold back real progress. For example:
- Waiting for demand: Believing that GenAI adoption should be client-led; that they must be “convinced” before acting
- Overplaying caution: Treating every use case as a reputational hazard rather than a performance opportunity
- Performative compliance: Thinking that “having a policy” equals “having a strategy”
- Tech over substance: Confusing tool deployment with transformation—assuming a license purchase equals innovation
- Misreading signals: Assuming client curiosity is evangelism rather than a warning
We’re not asking you to lecture or scold. We’re asking you to highlight the blind spot that, if corrected, would do the most to move the market forward—where a shift in perspective would make providers both more commercially relevant and more trusted.
This question helps identify perceptual inertia. Clients aren’t trying to sell firms on GenAI; they’re signaling urgency. Many providers have not yet grasped that this may be more than just another hype cycle; it could be a reckoning.
We’re not looking for generalities (e.g., “They’re too slow,” “They don’t listen”). We’re looking for the specific misconception that matters most—the one that repeatedly derails understanding, proposals, or progress. That’s why the question is singular and open-text. We want your unfiltered articulation of the mindset that keeps even sophisticated providers from seeing the landscape the way clients see it.
By surfacing and anonymizing these insights, we can feed them back to the market in composite form—not as complaint, but as guidance. The goal is alignment. If providers can see how their risk framing and commercial pacing are being perceived, they can recalibrate. That, more than any single policy, accelerates mutual progress.
In short, your honesty here helps everyone get unstuck.
19. Pivotal Lesson. What is the single most valuable insight or lesson your organization has learned so far from GenAI adoption—whether strategic, cultural, technical, or commercial?
ANNOTATION: This question invites reflection rather than reporting. We’re asking for the one lesson—the insight that has most reshaped how your organization thinks about, invests in, or approaches GenAI in legal service delivery.
Organizations often reach a moment of clarity as GenAI moves from concept to lived practice. The lesson may have emerged from planning, experimentation, execution, or client interaction. What matters is that it changed your understanding of what meaningful adoption truly demands.
Lessons take many forms. Some are strategic, revealing that scale follows structure, not enthusiasm. Others are cultural, showing that change rarely happens by persuading minds first and altering behavior later—it’s usually the reverse. Many are commercial, learned through the tension between how providers say they will innovate and how little they actually progress until they face economic consequences.
These are truths earned through experience—realizations forged in practice, not theory. We’re not seeking tidy conclusions about success or failure. The most instructive insights are often still in motion. Unresolved lessons are part of the learning story. Even brief responses carry weight. Collectively, they help chart the profession’s learning journey: how organizations are turning experimentation into understanding, and understanding into sustainable change.
20. Looking Ahead. What potential GenAI capability or functionality—not yet available today—would most significantly improve your ability to deliver legal services?
ANNOTATION: This question looks forward from what you’ve learned to what you now hope for. We’re asking not for prediction but for informed imagination: What capability, if it existed, would materially change your ability to deliver value to your organization?
The intent is to surface directional insight about what law departments actually need next for GenAI to become transformative rather than incremental. That could be technical (applications that reliably handle privileged data), structural (seamless workflow integration across systems), or conceptual (explainability robust enough to future-proof against the coming regulatory tsunami). Whatever the form, we’re interested in the frontier you now recognize after what you’ve experienced so far.
Your response can be aspirational, but it should still be grounded in the realities you’ve already encountered. Truths earned through experience—what you’ve discovered about GenAI’s current limits—are often the best guide to what would unlock its next stage of value. Consider what would remove a constraint, close a gap, or convert curiosity into capability.
Even if your answer borders on speculative, it tells us something about your direction of travel—what kinds of capability you believe would make GenAI genuinely consequential for professional work.
This question closes the loop begun with Pivotal Lesson. Where that question captured learning already earned, this one captures learning projected forward—how experience informs aspiration. Together, they describe both sides of the learning journey: what you now understand, and what that understanding tells you to want next.
21. Anything else? This optional catch-all question seeks, but does not require, information, observations, or opinions not elicited above that you consider important enough to share.
ANNOTATION: Use this optional, open-ended field for any information, observations, or opinions not elicited above that you consider important enough to share.
Suggestions. What recommendations do you have to improve this survey?
ANNOTATION: This closing question is aimed at refining the instrument itself. We are asking for your candid input on how this survey could be clearer, more efficient, or more valuable.
Examples of useful feedback might include:
- Wording changes to reduce ambiguity or friction
- Adjustments to response formats (e.g., ranges, dropdowns, free text)
- Additions or deletions of topics to better capture reality
We are committed to serving the entire ecosystem. The goal is to advance the collective conversation in ways that are maximally useful and minimally burdensome. We recognize the tension between those two aims, and we welcome input on how best to resolve it.
There is no pride of authorship here. This survey is not fixed. Evolution is necessary and welcome. The healthiest evolution will be responsive to the candid feedback of those most invested in the outcome. Your suggestions directly shape how this effort improves, grows, and continues to deliver value for all participants. Thank you!
L.E.G.A.L. Client Acknowledgment
L.E.G.A.L. (Leaders Exploring Generative AI in Law) is a permissioned intelligence system designed by LexFusion Intelligence, an arm of Baretz+Brunelle LLC. This Acknowledgment applies to your submission of responses to the L.E.G.A.L. Client Survey. The full L.E.G.A.L. Nondisclosure Policy is available here.
By submitting responses, you acknowledge and agree to the following.
1. Purpose and Design
L.E.G.A.L. is a standardized, reusable system designed to reduce fragmented market questionnaires while enabling longitudinal, behavior-grounded intelligence and benchmarking about GenAI in legal services.
The Client Survey captures demand-side expectations, priorities, and observed impacts. It is designed to support longitudinal analysis, shared benchmarking, and clearer alignment between clients and providers—without exposing individual participants.
2. No Attribution (Client Identity Protection)
L.E.G.A.L. does not share client identities, participation status, or response completeness with anyone, including providers or other clients. Participation in the Client Survey does not result in identification of your organization to any third party, including:
- No identification of your organization to providers
- No identification of your organization to other clients
- No identification of your organization as having participated in the Client Survey
- No attribution of any response, observation, or benchmark position to your organization
3. How Your Responses Are Used
Client Survey responses are used only in de-identified and/or aggregated form for benchmarking and analysis.
Participation in the Client Survey entitles your organization to receive the composite benchmark report shared with all L.E.G.A.L. participants. The composite report is de-identified and/or aggregated and is designed to provide a market-level point of reference. Where visualizations include distributions (e.g., dot plots), individual responses appear only as unlabeled, non-attributed points and are provided only when minimum thresholds are satisfied. For program-wide use:
- No client organization is identified or identifiable
- Participation status is not disclosed
- Responses appear only in fully de-identified, aggregated, or synthesized form
- Open-text responses (if any) are referenced only as paraphrased composite themes with no attribution
Additional, tailored benchmarking outputs are available if you also request Provider Survey responses and the applicable minimum thresholds are met. Those outputs may include de-identified/aggregated comparisons of your provider panel against market benchmarks and, where appropriate, comparisons that help interpret provider-side results alongside your own demand-side posture.
4. Persistence, Submission (Saving), Authorization, and Collaboration
Survey responses are retained as a persistent baseline to support longitudinal analysis and reduced respondent burden over time (e.g., enabling you to update rather than start from scratch in future rounds).
Submission saves current responses, including in-progress responses. You may submit the survey multiple times as you refine responses. Submission alone does not release your responses for use in de-identified benchmarking.
Release of responses for the purpose of benchmarking is separate from submission. At the end of the survey, you are asked to confirm whether your submitted responses may be included in L.E.G.A.L.’s de-identified and/or aggregated benchmarking datasets (including the program-wide composite market report). Authorization is controlled by the Acknowledgment checkbox below:
- If the box is checked, inclusion is authorized as described
- If the box is unchecked, inclusion is not authorized
You may submit the survey with the authorization box unchecked in order to save responses. Authorization may be withdrawn at any time by unchecking the box. Withdrawal applies prospectively.
Submission is also required to enable collaboration. Because submission functions as saving, it allows internal collaborators to access the survey, continue work, and update responses over time.
5. Contact Information
Contact information entered in the survey will be retained solely for program administration. Contact information will not be shared with any third parties, including your legal service providers. Contact information may, however, be used to facilitate coordination within your organization (e.g., routing subsequent registrations to your organization’s established Primary Point of Contact) consistent with the original business purpose for which contact information was provided.
6. Questions
All questions should be directed to LexFusion Intelligence at LFIntel@baretzbrunelle.com.