LEADERS EXPLORING GENERATIVE AI IN LAW
Provider Survey Annotations
Moving the Needle with GenAI
This survey seeks insights into GenAI’s commercial impact. Our focal point is GenAI initiatives that have changed how legal work is delivered, packaged, and priced, with attendant shifts in commercial dynamics, including how work is allocated, what it costs, and how fees are structured.
Our framing intentionally excludes a significant portion of GenAI-related activity. We are prioritizing use cases with visible, client-facing impact—initiatives that drive new work, deliver measurable savings, or enable new pricing approaches. In short, we’re looking for examples where GenAI has become a factor in client decisions or economic outcomes.
Given the relatively early stage of GenAI adoption, some respondents may struggle to identify GenAI initiatives that have already produced commercial impact (“Doing”). We are, however, also forward-looking, with questions oriented toward both the near term (“Planning”) and the medium term (“Thinking”). If a respondent is not doing, planning, or thinking about any GenAI initiative that will result in client-visible commercial impact, that too is informative.
We recognize that commercial impact is the product of many factors. For our purposes, GenAI need not be the sole determining factor, but rather recognized (by you) as a key contributing factor to commercial outcomes.
ANNOTATION: This survey is designed to surface the clearest signal we can extract from an otherwise noisy landscape: commercial impact. We recognize that economic outcomes are lagging indicators—and that much of the activity around GenAI in legal services is still formative, experimental, or internal. That’s understood. It’s also exactly why we’ve chosen this framing.
Our objective is not to capture every experiment or back-office deployment. Instead, we are zeroing in on where GenAI shifts the economics of legal service delivery: who does the work, what clients pay, and how pricing models are evolving. This likely represents only a narrow slice of your GenAI journey—but it’s the one that speaks most directly to real-world traction.
We recognize that not everything that matters shows up on the balance sheet immediately. In addition to demonstrable savings and getting more for their money, clients prize speed, quality, predictability, risk reduction, etc. Early GenAI gains often register first as shorter cycle times, higher first-pass accuracy, fewer rework loops, or smoother staffing. These are leading indicators. If, however, these improvements are substantial and sustained, they should convert into commercial signals over time: higher win rates, greater share of wallet, measurable cost-to-serve reductions, and a shift toward better-calibrated fee arrangements that reflect a less labor-dominant approach to legal service delivery. Said differently, it would be odd to consistently do work demonstrably better and faster yet see zero impact on how work is awarded or priced. If commercial impact never materializes, delivery impact is hard to credit.
At present, commercial impact may well be modest or still emerging. That’s why we’ve structured the survey around Doing, Planning, and Thinking. “Doing” keeps us anchored in lived experience, while “Planning” and “Thinking” allow us to map the road ahead. The intent of the Doing | Planning | Thinking triad is to balance rigor with inclusivity. It recognizes that measurable commercial outcomes are rare but emerging. This progression creates a temporal spectrum of maturity, allowing benchmarks across organizations at different stages. The narrative also normalizes partial progress, encouraging candor by framing not-yet-realized outcomes as data, not deficiency.
Please do not account for everything. Rather, please focus on the GenAI use cases and initiatives that have already changed (Doing), are on track to change (Planning), and have the most potential to change (Thinking) your commercial relationships with clients in observable ways—whether that means new work for you, cost savings for them, or new pricing dynamics for both.
This is our lens. It is how we focus.
1. Doing | Planning | Thinking. Use the provided text boxes in this section to enter as much information as you deem relevant and sufficiently responsive. Alternatively, you may choose (it is optional) to complete pertinent sections of the Case Study Canvas and reference as many canvases as needed. Even if you do not make direct use of the Case Study Canvas, reviewing it should help you think through and frame your responses.
Doing. Which GenAI initiatives have already changed your client commercial relationships (e.g., new work, savings, fee model shifts), and what are the demonstrable benefits for clients (i.e., that factor into their commercial decisions or deliver positive, visible-to-them commercial outcomes)?
ANNOTATION: We’re looking for the moments where GenAI stopped being an internal experiment and started influencing how clients buy, budget, or behave—any signal that commercial impact has begun to materialize. This question seeks specific examples of GenAI initiatives that have already made a visible difference in your client relationships—instances where technology has started to translate into tangible, commercial outcomes.
We’re interested in the proof points of impact, not the pilot programs. Think about where GenAI has changed the way your organization delivers work and how clients perceive or value that change. This could include:
- Winning new work based on GenAI-enabled capabilities
- Demonstrably improving speed, accuracy, or predictability in ways clients reward
- Delivering savings
- Delivering efficiency gains that influenced fee structures or renewals
- Introducing new pricing models, subscription approaches, or service packages
- Retaining work that might otherwise have moved elsewhere because of GenAI-driven differentiation
The emphasis is on client-visible results—where GenAI has become a factor in a client’s commercial decision, even if the outcomes are modest or emerging. We’re not measuring internal productivity in isolation but rather the external signal that productivity creates.
If examples are still early or limited, that’s fine. Even directional or partial stories help map where GenAI is beginning to “show up” in the market.
Planning. Which already-resourced GenAI initiatives do you expect to most change your client commercial relationships over the next 12 months (e.g., new work, savings, fee model shifts), and what are the expected benefits for clients (i.e., that will factor into their commercial decisions or deliver positive, visible-to-them commercial outcomes)?
ANNOTATION: This question asks where you expect GenAI to move from promise to proof within the next year—initiatives that have left the whiteboard and entered the budget. The focus is on what’s next: the GenAI initiatives that are already funded, resourced, or otherwise underway and that you expect to deliver visible commercial change for clients in the next 12 months.
We’re looking for near-term, real-world momentum. Think about projects that have cleared the experimental stage: budgeted pilots, tool integrations, workflow redesigns, or client-facing solutions now in rollout. These initiatives should have credible paths to commercial relevance, even if their impact has not yet fully materialized.
Consider where GenAI will likely begin to show up in client economics or decision-making. For example:
- Anticipated savings or efficiency gains reflected in pricing or billing
- New service offerings built around GenAI capabilities
- Evolving fee structures or delivery models enabled by GenAI efficiencies
- Co-developed initiatives with clients that demonstrate shared value creation
You don’t need certainty—what matters is reasonable expectation backed by resources and intent. “Planning” captures the middle ground between concept and proof: initiatives that have organizational commitment but may not yet have market validation.
If few initiatives meet that bar, that’s still useful insight. The absence of funded, near-term projects is itself an indicator of where the organization stands on GenAI.
Thinking. Which GenAI initiatives, whether or not yet funded or resourced, do you expect to most change your client commercial relationships over the next three years (e.g., new work, savings, fee model shifts), and what are the expected benefits for clients (i.e., that will factor into their commercial decisions or deliver positive, visible-to-them commercial outcomes)?
ANNOTATION: This question explores your medium-term outlook—how you expect GenAI to change the commercial landscape of client relationships over the next three years.
Here, we’re asking you to think beyond current projects and budgets. What’s on the horizon that could materially alter how clients engage with you—how they buy, evaluate, or value your services?
The focus is not on prediction for prediction’s sake, but on your strategic imagination grounded in experience.
Consider:
- How might maturing GenAI capabilities—legal-specific or general-purpose—expand what you can credibly offer or compress what clients expect to pay?
- Could GenAI shift staffing models, leverage ratios, or the economics of certain practice areas?
- Might it enable new fee structures, delivery partnerships, or embedded client solutions?
- Are there market catalysts (e.g., regulation, platform consolidation, client procurement trends) that could accelerate or redirect your plans?
You don’t need a road map, just a reasoned view. “Thinking” captures the stage where concepts start forming strategy—where an organization begins to identify the plausible next inflection points for commercial impact, even if implementation is still uncertain.
Speculative is fine; untethered is not. We’re looking for forward signals grounded in your understanding of client behavior, competitive dynamics, and internal capacity.
If you’re unsure what might change, that’s data too. It suggests the industry remains in a period of watchful adaptation—a finding as valuable as bold forecasts.
2. GenAI Strategy and Structure. Does your organization have the following (or a functional equivalent)?
Dropdown Options
Do Not Have (no such policy, role, or structure in place)
Have (formal policy, role, or structure in place)
Functional Equivalent (not the exact form, but something materially similar serving the same purpose)
In Progress (actively developing or recruiting; more than an idea, but not yet complete)
| | Response |
|---|---|
| Documented GenAI Strategy or Framework | [DROPDOWN][DD] |
| Documented GenAI Risk Management Framework | [DROPDOWN][DD] |
| Documented GenAI Responsible Use Framework | [DROPDOWN][DD] |
| Documented GenAI Vendor Risk Assessment Process | [DROPDOWN][DD] |
| Documented Client Communication Framework on GenAI | [DROPDOWN][DD] |
| Formal GenAI Training Program | [DROPDOWN][DD] |
| Chief Innovation Officer/AI Officer | [DROPDOWN][DD] |
| Director of Innovation/AI | [DROPDOWN][DD] |
| AI/Innovation Steering Committee | [DROPDOWN][DD] |
| Dedicated Applied AI Roles (Internal Facing) | [DROPDOWN][DD] |
| Dedicated Applied AI Roles (Client Facing) | [DROPDOWN][DD] |
| Legal Technology Subsidiary with GenAI Offerings | [DROPDOWN][DD] |
| GenAI Co-development Collaborations with Third Parties | [DROPDOWN][DD] |
[optional] Commentary. You are welcome to add categories, color, context, clarifications, or caveats to the above:
ANNOTATION: This question maps how organizations are structuring GenAI leadership, governance, and accountability. It’s designed to accommodate the wide variety of titles, frameworks, and terminologies that providers use in practice.
We know that not every organization uses identical labels. The aim here is to understand what functions exist, not whether they’re called exactly what’s listed. For instance, an “Innovation Board” might serve the same purpose as a “GenAI Steering Committee.” A “Client Tech Integration Policy” might effectively cover “GenAI Client Communication Protocols.”
That’s what the “Functional Equivalent” option is for—it lets you answer “yes” where your existing structure does the same job, even if not in the same form or under the same name.
This flexibility isn’t semantic—it’s analytic. The goal is to capture whether the function is being performed, not whether the title matches the template.
When we say “documented,” we mean that the policy, role, or structure exists in a durable written or digital form—something that can be referenced, shared, or applied consistently. It does not need to be codified in a formal manual or policy binder. If it’s captured in a memo, a governance deck, an internal FAQ, or a SharePoint page—and it actually guides behavior in practice—it qualifies as documented. The purpose here is not formalism but traceability: Can someone find it, read it, and rely on it?
GenAI Strategy or Framework. This refers to your overarching road map or articulation of how your organization approaches GenAI—its objectives, principles, and boundaries. It may take the form of a slide deck, an internal memo, or a written statement of priorities. What matters is that it captures the why and how of GenAI adoption, even if it isn’t labeled a “strategy document.”
GenAI Risk Management Framework. A set of policies, processes, or controls addressing data security, confidentiality, privacy, model provenance, and regulatory compliance in the use of GenAI. It counts whether it’s centralized (firm-wide) or distributed (practice-specific), so long as it reflects a defined method of identifying, assessing, and mitigating GenAI-related risks.
GenAI Responsible Use Framework. A governance construct focused on ethical and professional considerations like fairness, bias, explainability, and accountability. This framework typically defines what “responsible” use means in your firm’s context and aligns GenAI activity with professional duties, bar guidance, client expectations, or internal codes of conduct.
GenAI Vendor Risk Assessment Process. A formalized or repeatable process for vetting GenAI vendors before procurement or deployment. This includes reviewing data handling, licensing, indemnity, security, and reliability standards. A written checklist, a standardized diligence questionnaire, and a defined internal approval path all qualify as documentation.
Client Communication Framework. This refers to the internal guidance or policies governing client disclosures—when, how, and under what circumstances clients are informed of GenAI use in their matters. It often includes rules for notice, opt-in, or opt-out practices and may cover both proactive (engagement letters, FAQs, playbooks) and reactive (upon inquiry) communication. The key is that the firm has articulated how transparency is handled, even if approaches vary by client or practice.
GenAI Training Program. A structured curriculum or set of organized learning opportunities—whether internal modules, live workshops, certifications, or external courses—designed to build GenAI fluency across roles. The focus is on deliberate instruction, not one-off demos or curiosity-driven exploration. Programs count even if voluntary, provided they have defined content, cadence, or completion tracking.
Chief Innovation Officer/Chief AI Officer. An executive or partner-level role formally accountable for steering innovation or GenAI strategy at an enterprise level. This person typically reports to firm leadership and is responsible for setting priorities, allocating resources, and representing the firm’s position internally and externally. A title variation (e.g., “Head of AI Strategy”) counts if the mandate and level align.
Director of Innovation/Director of AI. A senior operational leader focused on execution and delivery—overseeing day-to-day progress of GenAI initiatives across teams, practices, or client engagements. The emphasis here is on translation of strategy into implementation: managing pilots, rollouts, vendor relationships, or workflow redesign. The title may differ, but the functional scope should include operationalizing AI in practice.
AI or Innovation Steering Committee. A cross-functional governance body that coordinates GenAI and innovation initiatives across practice, IT, KM, risk, and marketing. This committee typically provides oversight, ensures resource alignment, and arbitrates priorities among competing internal initiatives. It counts whether formalized in a charter or operating by consensus, so long as it meets regularly and has clear remit.
Applied AI Roles (Internal-Facing). “Applied AI” refers to professionals who build, adapt, or deploy GenAI in real workflows as opposed to purely research or policy functions. Internal-facing applied AI roles are dedicated positions (meaning GenAI is a defined part of their job description or primary focus) that work on improving internal efficiency—knowledge management, drafting automation, etc.
Applied AI Roles (Client-Facing). These are also dedicated roles, meaning GenAI enablement is central to the position, not an occasional duty. Client-facing applied AI roles work directly with clients on GenAI-related offerings—helping design, co-develop, or deliver AI-enabled solutions. These may sit within innovation, solutions, or consulting teams and typically act as translators between technology and client value propositions.
Legal Technology Subsidiary with GenAI Offerings. A separate legal entity or affiliated company that provides GenAI-enabled software, tools, or managed services. These subsidiaries may commercialize internal technology, deliver stand-alone products, or partner with clients through subscription models. The defining feature is corporate distinctness—a separate P&L or brand identity dedicated to GenAI-enabled products or services.
GenAI Co-Development Collaborations with Third Parties. Joint initiatives between your organization and external partners—such as technology vendors, clients, or academic institutions—aimed at developing, testing, or scaling GenAI solutions. Co-development counts whether the goal is research, product creation, or workflow innovation. What matters is that it involves shared design, shared data, or shared delivery, not mere vendor licensing.
Each of these categories captures a different element of GenAI’s institutionalization—from strategic planning to applied execution. Not every provider will have all components.
Moreover, we recognize that structured questions like this can’t capture the full nuance of how organizations actually operate. They’re designed to create comparable benchmarks, not to tell the whole story.
This is why there’s an optional commentary box. If you feel there’s something material that this structure doesn’t surface—something that would change how your answers should be understood—please use that space to add context. It’s entirely optional and not meant to be exhaustive, but it’s your place to clarify or highlight what the checklist can’t.
3. Training, Access, and Usage. Approximate, as accurately as you can, what percentage of your client-facing personnel (timekeepers) fit into each category below.
We’re focused on intentional use of firm-provided generative systems (e.g., prompting, drafting, summarizing, or analysis using GenAI tools)—not passive exposure to ambient AI features that now exist in many applications. Prompting an AI-based legal research tool to produce a memo is deliberate use of a generative system. Running a traditional search in a case law database is not, even if the search is now AI-enhanced and case summaries are now AI-generated. The latter matters, but ambient AI is becoming so ubiquitous that including it would drown out the question.
Dropdown Options
Don’t Know – Not tracked in a manner that enables an approximation with any adequate degree of confidence
None = 0%
Few = 1–10%
Some = 11–25%
Many = 26–50%
Majority = 51–75%
Most = 76–100%
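For respondents deriving these figures from seat counts or telemetry, the bands above can be expressed as a simple lookup. This is a minimal sketch only; the function name and the assumption of whole-number percentages are ours, not part of the survey.

```python
def usage_band(pct):
    """Map a measured percentage of client-facing timekeepers to the
    survey's response bands. Assumes whole-number percentages; band
    upper bounds are inclusive (Few = 1-10%, Some = 11-25%, etc.).
    Returns "Don't Know" when no confident approximation exists."""
    if pct is None:
        return "Don't Know"
    if pct == 0:
        return "None"
    if pct <= 10:
        return "Few"
    if pct <= 25:
        return "Some"
    if pct <= 50:
        return "Many"
    if pct <= 75:
        return "Majority"
    return "Most"
```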
| | Response |
|---|---|
| Completed Formal, Firm-Provided GenAI Training | [DROPDOWN][DD] |
| Have a Provisioned Seat for Firm-Provided Generalist GenAI Tool(s) | [DROPDOWN][DD] |
| Monthly User of Firm-Provided Generalist GenAI Tool(s) | [DROPDOWN][DD] |
| Weekly User of Firm-Provided Generalist GenAI Tool(s) | [DROPDOWN][DD] |
| Daily User of Firm-Provided Generalist GenAI Tool(s) | [DROPDOWN][DD] |
| Have a Provisioned Seat for Firm-Provided, Legal-Specific GenAI Tool(s) | [DROPDOWN][DD] |
| Monthly User of Firm-Provided Legal-Specific GenAI Tool(s) | [DROPDOWN][DD] |
| Weekly User of Firm-Provided Legal-Specific GenAI Tool(s) | [DROPDOWN][DD] |
| Daily User of Firm-Provided Legal-Specific GenAI Tool(s) | [DROPDOWN][DD] |
[optional] Commentary. You are welcome to add categories, color, context, clarifications, or caveats to the above:
ANNOTATION: This question examines the human footprint of your GenAI adoption—how widely GenAI training, access, and actual use have reached across your client-facing workforce.
We distinguish between training, access, and usage:
Training means structured, firm-provided instruction (curriculum, certification, onboarding modules).
A Provisioned Seat is a license that has been (i) assigned to a specific user, (ii) technically enabled (login credentials issued and activated), and (iii) communicated to the user as available for use. Purchased but unassigned licenses, dormant accounts, or bulk licenses held in reserve do not count as provisioned seats.
Usage means deliberate engagement—actually prompting or relying on GenAI to generate, draft, or analyze content, as part of daily or weekly work.
We intentionally exclude ambient AI (e.g., auto-summarization, smart search) that users might encounter passively. That functionality matters, but the purpose here is to measure the intentional integration of GenAI into substantive legal and client-facing work. Further, ambient AI is becoming so ubiquitous—and again, this matters—that it would easily swallow up the question.
We understand that firms vary in how they define “training” or “usage.” That’s expected. Please interpret each category in a way that best reflects your internal reality. What matters most is directional accuracy—whether usage is nascent, spreading, or routine.
Generalist GenAI tools are broad-purpose systems designed for use across many industries—examples include ChatGPT for Enterprise, Microsoft Copilot, Gemini, Claude, or Perplexity. These tools support a wide range of drafting, summarizing, or analytical tasks but are not tailored to legal workflows or terminology.
Legal-specific GenAI tools are domain-tuned systems built for legal applications—such as Harvey, Legora, CoCounsel, LiteraOne, or proprietary firm-tuned models. These typically connect directly to legal data sources (case law, regulations, contracts, filings, precedents) and deliver outputs aligned with professional practice standards.
A Monthly Active User is any timekeeper who intentionally uses a firm-provided GenAI tool at least once in a given month for substantive work (e.g., prompting, drafting, summarizing).
A Weekly Active User is someone who uses such tools at least once per week, reflecting regular but not constant engagement.
A Daily Active User is a timekeeper who relies on GenAI tools in day-to-day work, typically integrating them into core workflows (e.g., research, drafting, review).
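One way to operationalize these tiers from usage logs is to classify each timekeeper by the number of distinct days of intentional use in a trailing window. This is a sketch under stated assumptions: firms' telemetry and definitions vary, and the thresholds below are illustrative, not survey-mandated.

```python
def activity_tier(active_days, window_days=28):
    """Classify a timekeeper by the count of distinct days on which they
    intentionally used a firm-provided GenAI tool in a trailing window.
    Thresholds are illustrative assumptions, not survey definitions."""
    if active_days == 0:
        return "inactive"
    if active_days >= window_days * 5 // 7:   # roughly every working day
        return "daily"
    if active_days >= window_days // 7:       # at least about once per week
        return "weekly"
    return "monthly"                          # at least once in the window
```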
We recognize that tracking this data can be difficult, especially when usage occurs across different tools, departments, or licensing arrangements. An inability to provide precise figures does not imply inactivity or immaturity—it may simply reflect limitations of the tooling. Still, the presence or absence of visibility tells us something meaningful about where the market stands: how much GenAI is being deployed without systematic measurement.
Finally, we recognize that structured questions like this can’t capture every nuance. They’re designed to benchmark at scale, not describe every variation of implementation.
That’s why there’s an optional commentary box. If you think there’s a material aspect of your training, access, or usage reality that these categories don’t reflect, feel free to add it there. It’s entirely optional and not meant to be exhaustive, but it’s your space to clarify what the numbers alone can’t.
4. Default Policies. Absent a client-communicated policy, what are your organization’s default policies and client-notice practices governing the use of GenAI in client work? Please indicate your organization’s default policy and corresponding client-notice approach for each use case below.
Policy Dropdown Options
No Policy (no formal policy currently in place)
Prohibit (usage is not permitted under any circumstances)
Opt-In (usage is prohibited unless a client expressly consents)
Opt-Out (usage is permitted unless a client expressly declines)
Permit (usage is allowed in all client matters by default)
Not Uniform (no single default; policies vary across clients, practices, or more nuanced use cases)
Notice Dropdown Options
Silent (do not proactively notify clients of our policy)
Communicate (proactively notify clients of our policy)
Not Uniform (no single default; notice policies vary across clients, practices, or more nuanced use cases)
| | Policy | Notice |
|---|---|---|
| Public Tools, Nonconfidential Work | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Public Tools, Confidential Work | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Private Tools, Nonconfidential Work | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Private Tools, Confidential Work | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Use Client Confidential Info to Train Our Models Only for Their Work | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Use Client Confidential Info to Train Our Models for General Work | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Use GenAI for Legal Research | [DROPDOWN][DD] | [DROPDOWN][DD] |
[optional] Commentary. You are welcome to add categories, color, context, clarifications, or caveats to the above:
ANNOTATION: This question captures your default governance baseline—how you handle GenAI use in client work when no specific client policy applies, and how you communicate (or don’t) about that position.
We recognize that real-world policies are rarely uniform. Different practices, regions, or client segments may operate under different rules. That’s fine—and analytically valuable. This section simply seeks to understand your default operating environment, not to force one-size-fits-all answers.
“Default policy” refers to your standing internal rule on GenAI use for client matters in the absence of specific client instructions. “Notice” refers to whether clients are proactively informed of that default.
You’ll see paired dropdowns for each use case (e.g., public vs. private tools, confidential vs. nonconfidential work). The intent is to build a composite picture of how providers are managing both risk and communication around GenAI in practice—not in theory.
Public tools refer to broadly available, consumer-grade GenAI systems such as ChatGPT, Claude, Gemini, or Perplexity—tools that operate in shared cloud environments and are not specifically configured or secured for any one organization.
Private tools are enterprise or restricted-access systems—for example, ChatGPT Enterprise, Microsoft Copilot, Harvey, Legora, LiteraOne, or proprietary in-house models. These are designed to provide enhanced security, administrative control, and contractual assurances around data handling.
Nonconfidential work means tasks or materials that do not contain client confidential information and could be safely shared, summarized, or processed without risk of breaching confidentiality—e.g., public filings, marketing copy, or internal templates.
Confidential work involves any content containing or derived from client confidential information—that is, information protected by privilege, contract, or duty of confidentiality, whether or not formally labeled as such. This includes matter-specific documents, client data, advice, and communications.
When we refer to using client confidential information to train models, we mean fine-tuning or reinforcement training where that data is incorporated into a model’s underlying parameters or embeddings—so that the model’s future outputs may be influenced by it.
We recognize that structured questions like this can’t capture every nuance or variation. They’re designed to benchmark at scale, not to account for every edge case.
That’s why there’s an optional commentary box. If there’s something material these categories don’t capture—for instance, policy differences between lines of business, risk tiers, or client contract terms—feel free to include it there. It’s not meant to be exhaustive, but it’s your place to clarify or highlight what the dropdowns miss.
5. Anything Else? This optional catch-all question leaves space for—but does not require—information, observations, or opinions not elicited above that you consider important to share regarding your organization’s usage of GenAI to deliver legal services.
ANNOTATION: This question provides open space for input that doesn’t neatly fit into earlier categories but may be valuable to the broader dialogue. It’s entirely optional. Even brief observations can highlight areas meriting deeper exploration in the future.
De-Identified Section (Not Shared in an Attributable Way)
Your responses in this section are not disclosed to the Requesting Client in an identifiable, provider-attributable way, and they are not shared with other third parties. These responses are used only in de-identified and/or aggregated benchmarking—including (i) client-specific benchmark reporting (subject to minimum thresholds and dataset-completeness transparency) and (ii) the program-wide composite market report you receive as a participant.
Confidentiality is meant to create the conditions for candor. Candor matters because the point is to surface the root causes of client-provider misalignment at the market level—without putting any individual commercial relationship at risk. Please answer in good faith and in proportion to reality: There is no finish line—and even if there were, no organization (including yours) is anywhere near it.
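As an illustration of the “minimum thresholds” referenced above, benchmark aggregation can suppress any cell with too few respondents so that no individual provider is inferable. This is a sketch only; the threshold value, function name, and field names are our assumptions, not the program's actual methodology.

```python
def benchmark_cell(values, min_n=5):
    """Aggregate one benchmark cell, suppressing it when the respondent
    count falls below a disclosure threshold. min_n = 5 is illustrative;
    the survey does not specify the actual minimum."""
    n = len(values)
    if n < min_n:
        # Too few respondents: report only the count, never the statistic.
        return {"n": n, "mean": None, "suppressed": True}
    return {"n": n, "mean": sum(values) / n, "suppressed": False}
```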
6. General Commercial Direction. Which one of the following scenarios do you expect to be closest to the commercial direction of the legal market for providers like you over the next 36 months? [check one]
| | Demand Down | Demand Flat | Demand Up |
|---|---|---|---|
| Economics Unchanged | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Economics Transformed | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
[optional] Commentary. You are welcome to add categories, color, context, clarifications, or caveats to the above:
ANNOTATION: This question is designed to situate GenAI within the bigger picture of market economics. We are asking you to forecast not just your own business but the overall commercial direction of the legal market over the next three years.
The structure forces a choice across two axes:
- Demand for Legal Services → rising, falling, or flat
- Economics of Delivery → unchanged vs. transformed
That yields six distinct combinations, each carrying a different implication:
- Demand Down/Economics Unchanged → The most pessimistic outlook. Here, overall demand for external legal services contracts, but the fundamental business model remains intact. GenAI may help clients do more in-house or cut volume, without meaningfully changing how providers price or deliver.
- Demand Down/Economics Transformed → A “disruption” scenario. GenAI reduces demand for traditional services (insourcing, automation, disintermediation), while also forcing fundamental change in provider economics, as staffing, leverage, and pricing models all shift.
- Demand Flat/Economics Unchanged → A conservative baseline. GenAI adoption produces efficiency gains that are absorbed within existing structures. Providers continue to operate under familiar economic models, and market demand neither grows nor contracts in a material way.
- Demand Flat/Economics Transformed → A stability-plus-disruption mix. Demand for services stays roughly level, but how providers deliver those services changes significantly (e.g., more technology, different staffing, new fee structures). The pie is the same size, but it is baked differently.
- Demand Up/Economics Unchanged → An “expansion without disruption” scenario. Demand for legal services grows (perhaps due to regulatory complexity, risk, or new legal domains), but GenAI efficiency gains are captured within traditional delivery models. Economics improve incrementally but not fundamentally.
- Demand Up/Economics Transformed → The most expansive view. GenAI both enlarges the market (by creating new categories of work or lowering costs that stimulate demand) and reshapes its economics (new players, new pricing, new workforce models). This is the high-change, high-growth scenario.
We recognize that reality will not be uniform across the industry—different segments, geographies, and practices will move differently. Still, we ask you to select the combination that best reflects your baseline expectation for the market as a whole.
7. General Commercial Impact. What is your perspective on how the automation push surrounding GenAI has affected, and will affect, the legal market overall? How might increased automation proportionally, or not, enhance law departments’ capacity to meet demand internally? How do the related resourcing decisions cascade, or not, to external providers like you?
Total Demand is the demand placed on the law department to meet the broader organization’s substantive legal needs. In one world, demand decreases due to upstream automation. In another world, demand increases because of greater business velocity, ratcheting expectations, and/or legal complexity (legal issues raised by GenAI and the accompanying compliance burden). In between, demand could be flat because GenAI has no impact or because demand impacts offset.
Total Legal Spend is total fiscal resources allocated to the law department accounting for the interplay of the volume of legal work, the automation of legal work, and organizational perspectives/perceptions/politics.
Internal Legal Spend is total fiscal resources allocated to the law department’s own delivery of legal services to meet the broader organization’s substantive legal needs.
Internal Tech Share is the share of internal legal spend directed toward technology. For our purposes, it is “internal” as long as it hits the law department’s budget—even if the law department is paying an external tech provider. It is a ratio, not a raw number. The share (the percentage of budget) can increase even if spend itself decreases, and vice versa.
External Legal Spend is the total fiscal resources allocated to external providers like you to meet the broader organization’s substantive legal needs.
External Tech Share is the share of external legal spend directed by providers like you toward technology. It is a ratio, not a raw number. The share (the percentage of budget) can go up even if spend itself goes down.
Again, the question is about the market-wide impact of GenAI. Separating out other factors (e.g., demand growth due to other drivers), how does the perceived inflection point around GenAI affect, or not, law departments and, in turn, external providers like you?
| | To Date | Next 12 Months | Next 36 Months |
|---|---|---|---|
| Total Demand | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Total Legal Spend | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Internal Legal Spend | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Internal Tech Share | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| External Legal Spend | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| External Tech Share | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
Dropdown Options
Heavy Increase = > 30%
Modest Increase = 10–30%
Light Increase = < 10%
Flat
Light Decrease = < 10%
Modest Decrease = 10–30%
Heavy Decrease = > 30%
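For clarity, each band above combines a direction (increase or decrease) with a magnitude cutoff. As an illustrative sketch only (not part of the survey instrument), the mapping from a signed percentage change to a band label could be expressed as:

```python
# Illustrative sketch: map a signed percentage change to the dropdown band.
# Band names and cutoffs are taken from the options listed above; the
# function itself is hypothetical and not part of the survey.
def band(change_pct: float) -> str:
    magnitude = abs(change_pct)
    if magnitude == 0:
        return "Flat"
    if magnitude < 10:
        size = "Light"
    elif magnitude <= 30:
        size = "Modest"
    else:
        size = "Heavy"
    direction = "Increase" if change_pct > 0 else "Decrease"
    return f"{size} {direction}"
```

Under the cutoffs as stated, a change of exactly 10% or 30% falls in the "Modest" band, and anything above 30% is "Heavy."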
[optional] Commentary. You are welcome to add categories, color, context, clarifications, or caveats to the above:
ANNOTATION: This question asks you to make your assumptions visible. We’re mapping how organizations are thinking, reacting, and reallocating under conditions of extreme uncertainty—because those reactions are already shaping the market.
Right now, no one truly knows the magnitude or direction of GenAI’s commercial impact on the legal market. Every organization, every client, every provider is navigating a fog of partial information and shifting expectations. The only certainty is that uncertainty itself has intensified.
Even so, decisions are already being made. Budgets are being reallocated. Hiring plans are being revised. These moves are driven as much by expectations as by evidence—expectations about efficiency, automation, new market realities, and attendant client pressure. Whether those expectations prove right or wrong, they’re already producing real, measurable consequences.
This section is designed to surface theories of the case that are guiding decisions—explicitly or implicitly. How do you believe the legal market is evolving under the combined pressure of GenAI, automation, and client demand for efficiency? How are you, your peers, and your clients behaving as if certain futures are more likely than others?
Your answers here reflect your operating assumptions. We expect those assumptions to be uncertain—and yet to guide your decisions nonetheless, because uncertainty is unavoidable and decisions must be made regardless. The fact is that most of us are still deciding in the dark while trying to appear deliberate.
Finally, if your perspective doesn’t fit neatly into the structured options, you have a commentary box. Use it if you wish to describe dynamics that the dropdowns can’t capture. It’s entirely optional.
8. Commercial Impact on Your Organization. How has GenAI integration into legal service delivery affected your organization in terms of revenue, margin, and headcount thus far—and how do you project that impact to evolve over the next 12 months and 36 months?
Direction Dropdown Options
None (no impact)
Negative (net reduction)
Neutral (offsetting impacts)
Positive (net increase)
Magnitude Dropdown Options
None = no impact
Minimal = 1–5%
Moderate = 6–15%
Material = 16–30%
Major = 31–50%
Massive = > 50%
| | To Date | To Date | Next 12 Months | Next 12 Months | Next 36 Months | Next 36 Months |
|---|---|---|---|---|---|---|
| | Direction | Magnitude | Direction | Magnitude | Direction | Magnitude |
| Revenue | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Margin | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Headcount | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
[optional] Commentary. You are welcome to add categories, color, context, clarifications, or caveats to the above:
ANNOTATION: This question turns inward—asking not what’s happening in the market at large but how GenAI is showing up in your own organization’s commercial realities and expectations.
As with the previous question, it’s important to start with the premise that no one—truly, no one—knows with certainty how GenAI will affect revenue, margin, or headcount. We’re all operating amid heightened uncertainty and rapid change, with fragmentary evidence and mixed signals. This is not a test of predictive accuracy. The value lies in surfacing how leadership currently thinks about GenAI’s commercial effects—the operating narrative that shapes how resources are being allocated, what risks you’re managing, and what opportunities you’re prioritizing.
The technology itself is only part of the story. Even if today’s capabilities fall short of the hype—or exceed it—the expectations surrounding GenAI are already influencing behavior. Organizations are making choices about investment, staffing, pricing, and automation strategy based not only on what’s proven but also on what’s believed to be imminent. In other words, expectation is a driver of action.
Within that context, the real questions become: What is your organization’s working theory? How do you and your peers project the likely commercial consequences of GenAI integration, internally and externally? Are you positioning for growth, preparing for margin compression, or restructuring for efficiency?
We expect your answers to blend perception, partial data, and strategic assumption. That’s appropriate. Decisions are being made amid uncertainty—and those decisions, right or wrong, are already shaping organizational outcomes.
Your responses will help illuminate how providers are adapting under uncertainty. We’re seeking to understand the mental models guiding behavior: whether leaders view GenAI as a tailwind, a headwind, or both. What matters here is perspective, not proof.
The optional commentary box is provided to let you elaborate. You can use it to describe your organization’s internal logic or what kinds of evidence (e.g., client behavior, pricing shifts, productivity data) are shaping your view. It’s not mandatory, but it’s the ideal place to surface nuance that structured responses can’t capture.
9. Commercial Opportunities and Threats. Thinking about the commercial impacts of GenAI described in the preceding questions, please identify the primary commercial opportunity and the primary commercial threat affecting your organization for each of the three time periods mentioned below. Select one primary opportunity and one primary threat per time period. After identifying the primary items, please indicate the extent to which the remaining opportunities and threats have affected—or are expected to affect—your organization (i.e., use “PRIMARY” once per column).
“Opportunities” capture how GenAI may expand demand, open new markets, or enhance pricing power. “Threats” capture how GenAI may shrink demand, intensify competition, or undermine margins. This exercise is directional, not precise, hence the broad dropdown categories.
Dropdown Options
None/No Opinion (no observable impact)
Marginal (trivial or peripheral impact)
Material (noticeable, meaningful, and recurring)
Transformative (significant, structural change)
PRIMARY (biggest opportunity; biggest threat)
| Opportunities | To Date | Next 12 Months | Next 36 Months |
|---|---|---|---|
| Increased Outsourcing | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| More Hours Paid For | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Higher Rates | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Returns to Scale/More Lucrative Fee Structures | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Higher Realizations | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| New Practice Areas/Matter Types | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Differentiation from Peers | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Entry into New Segment(s) | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Clients More Willing to Allocate Work to You | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Accelerated Upskilling | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Threats | To Date | Next 12 Months | Next 36 Months |
|---|---|---|---|
| Insourcing | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Fewer Hours Paid For | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Lower Rates | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| No Returns to Scale/Less Lucrative Fee Structures | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Lower Realizations | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| New Entrants into Your Segment(s) | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Differentiation by Peers | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Homogenization | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| New Provider Types | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Clients More Willing to Move Work from You | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Clients Demand/Capture GenAI-Enabled Savings | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Deskilling | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
[optional] Commentary. You are welcome to add categories, color, context, clarifications, or caveats to the above:
ANNOTATION: This question gets more granular. It looks at the crosswinds driving the broader commercial patterns explored in the last three questions.
In Question 6, you took a wide-angle view of market direction.
In Question 7, you considered GenAI’s aggregate effects on overall demand and spend.
In Question 8, you reflected on how those dynamics are showing up inside your own organization.
Now, Question 9 drills into the specific opportunities and threats that together shape those outcomes—the component forces behind your organization’s commercial trajectory.
Again, we start from the premise that no one truly knows how this will play out. These are impressionistic judgments, not factual assertions. The exercise isn’t about forecasting accuracy; it’s about surfacing your working theories—the perceptions and assumptions guiding your real-world decisions.
Every organization is making choices in a fog of partial evidence, yet decisions about pricing, staffing, investment, and positioning are being made nonetheless. This question helps make the underlying assumptions explicit. What are you treating as headwinds, and what as tailwinds? Where do you sense new potential value, and where do you feel pressure or erosion?
We expect your answers to blend partial data, client anecdotes, internal observations, and instinct. That’s precisely what we’re trying to capture: the felt reality of operating under uncertainty.
Before diving into the individual items, it’s important to note that these factors don’t move independently. Some reinforce each other; others conflict. You can have multiple, even contradictory forces operating at once.
For example, more hours paid for and higher rates both assume the continuing dominance of the billable-hour model—yet returns to scale or new fee structures may assume its erosion. Similarly, an organization might simultaneously gain from more hours billed in certain practices while achieving higher realizations or greater leverage in others where GenAI enables outcome-based or fixed-fee models.
This question intentionally separates these items to make your reasoning visible—which theories you implicitly hold about how GenAI reshapes commercial mechanics. You might believe the market will remain fundamentally driven by billable hours, with GenAI simply accelerating volume and velocity. Or you might expect a structural shift toward more scalable, tech-leveraged models that decouple revenue from time. Both are viable perspectives, and many respondents will hold a mix of them.
The opportunities and threats are meant to be mirrors—each opportunity has a corresponding risk viewed from the opposite direction. The same dynamics that create openings for some providers can generate pressure for others, depending on market position, client mix, and timing.
We ask you to identify the primary commercial opportunity and the primary commercial threat for each of the three time periods covered in the question. By “primary” we mean the opportunity or threat that, in your judgment, has had (or will have) the most material influence on GenAI’s commercial impact for your organization during that specific period.
Your primary opportunity and primary threat may change across time periods, or they may remain consistent. You may find that one opportunity has been most significant to date, another will be most important over the next 12 months, and a different one will take precedence over a longer 36-month horizon. Alternatively, the same opportunity or threat may lead throughout. All of these patterns are plausible. The aim is not prediction with precision, but a clear view of how you believe these forces will sequence and evolve as GenAI capabilities and market dynamics develop.
After selecting one primary opportunity and one primary threat for each period, you are asked to indicate the extent to which the remaining opportunities and threats have affected—or are expected to affect—your organization. This two-step structure helps distinguish the forces that sit at the center of your commercial experience from those that exert more modest but still meaningful influence.
Opportunities
Increased Outsourcing. Superior external GenAI capabilities may create conditions in which clients choose to shift more work to external providers. This is not simply a matter of “more hours paid for.” It reflects the possibility that providers with demonstrably stronger GenAI-enabled workflows, quality assurance, throughput, or cost-efficiency become comparatively more attractive as execution partners. Importantly, increased outsourcing can occur even in environments where total demand remains flat or even declines—particularly if clients perceive that external capabilities deliver meaningfully better outcomes or materially reduce internal strain. For these reasons, increased outsourcing is treated as a distinct pathway rather than a derivative of changes in total hours.
More Hours Paid For. GenAI could increase total paid legal work not only by accelerating business activity but by creating new legal complexities. Regulations governing AI, data use, and automation are proliferating globally, and every enterprise will need counsel on compliance, contracting, and governance. In this view, GenAI doesn’t eliminate work; it generates new categories of it. The persistence of the billable-hour model supports this logic: As legal complexity expands, time spent advising and negotiating around GenAI expands with it.
Higher Rates. True expertise becomes more valuable in a world where baseline knowledge becomes universally accessible. Further, if GenAI enables faster, higher-quality, or more sophisticated outputs, some providers may command premium pricing for that enhanced capability. The assumption here is that clients are willing to pay for demonstrable value and that GenAI strengthens—not weakens—the case for premium expertise. This can coexist with, or even depend upon, the continuation of time-based pricing: Clients pay higher rates for the true experts that AI cannot replace or more for “enhanced hours” where automation underlies the work.
Returns to Scale/More Lucrative Fee Structures. This view holds that GenAI allows revenue to grow faster than headcount—creating efficiency gains that improve margins or enable non-hourly, value-based pricing. It assumes work can be modularized, standardized, or productized at scale. This can complement or contrast with the previous models: A provider might still bill hours in some segments while building alternative-fee or subscription models elsewhere. In short, hours may shrink (in raw numbers or as a share of revenue-generating activity), but profitability may rise.
Higher Realizations. Automation and improved workflow discipline reduce rework, write-offs, and unbillable time. Even if rates or volumes remain stable, collections improve because work is cleaner and more defensible. This view assumes operational impact before structural transformation—GenAI as an efficiency layer that tightens margins without upending pricing.
New Practice Areas/Matter Types. GenAI generates entirely new domains of legal work—AI governance frameworks, model-risk management, data-use compliance, IP around algorithmic outputs, and cross-border data transfer regimes. These areas are likely to expand significantly as legislation and enforcement increase. In this model, new demand offsets any commoditization of existing work.
Differentiation from Peers. Organizations that operationalize GenAI credibly can use it as a visible market signal—winning more mandates or attracting new client segments. Even if measurable ROI is uncertain, perceived capability drives commercial advantage. Differentiation becomes both a competitive and reputational asset.
Entry into New Segment(s). Automation can lower delivery costs, making it viable to compete in markets or client tiers that were previously uneconomic. Conversely, GenAI might allow expansion upward—eroding brand advantage and creating opportunities for challengers who work differently. Either direction broadens the addressable market.
Clients More Willing to Allocate Work to You. As GenAI reframes efficiency and innovation narratives, clients may reevaluate incumbents. Providers seen as ahead of the curve can capture reallocated spend from competitors perceived as lagging. This theory depends on client perception and their willingness to act on it.
Accelerated Upskilling. GenAI could compress training curves. Professionals become capable of higher-level work sooner, enabling leaner staffing and deeper leverage. Over time, this can enhance margins and scalability while improving talent retention through more engaging work.
Threats
Insourcing. Internal GenAI capabilities may enable clients to retain more work in-house. This is not merely the inverse of outsourcing, nor is it reducible to “fewer hours paid for.” Insourcing can rise even in scenarios where external spend also increases if overall demand expands or if internal capacity gains are unevenly distributed across work types. What matters here is the directional shift in client behavior: the degree to which GenAI tools improve internal productivity, confidence, and economics enough for clients to keep work that might previously have gone to external counsel. Because insourcing operates through its own mechanisms—organizational capabilities, budget decisions, adoption maturity—it warrants separate treatment as a distinct dynamic rather than a subset of pricing or volume changes.
Fewer Hours Paid For. This model assumes the billable hour erodes as automation replaces time (does the work) or displaces time (clients pay for outputs, not effort).
Lower Rates. Clients push to capture perceived productivity gains, arguing that GenAI-enabled efficiency should translate into lower average rates. This reflects a value-capture inversion—even where outputs improve, price expectations fall. The assumption is that clients view GenAI as a cost-saving tool.
No Returns to Scale/Less Lucrative Fee Structures. Efficiency gains fail to improve margins because clients capture the resulting savings or pricing pressure outweighs productivity improvements.
Lower Realizations. Perceived efficiency leads clients to heighten their scrutiny and demand more discounts, write-downs, and capped fees.
New Entrants into Your Segment(s). Just as GenAI may allow your organization to expand into new client tiers or adjacent markets, it also enables others to move into your space. If GenAI lowers barriers to scale, standardizes core capabilities, and reduces dependence on deep institutional infrastructure, then both traditional and nontraditional competitors—including ALSPs, consultancies, technology companies, and even clients themselves—can credibly compete for work that once required your footprint, brand, or legacy infrastructure.
Differentiation by Peers. Early adopters of GenAI who achieve visible, credible wins—whether through client-facing innovation, measurable efficiency, or strong public narrative—can reset market expectations for everyone else. Even modest pilot successes, if communicated effectively, can change the baseline for what clients believe is possible or expected. This can lead to expectation inflation across the market: Clients begin benchmarking every provider against the best stories they’ve heard, even if those examples are isolated, exaggerated, or unrepeatable. For organizations still in earlier phases of adoption, this dynamic can be destabilizing.
Homogenization. As tools standardize, capabilities converge. Differentiation declines, commoditization rises, and client procurement becomes more price-driven.
New Provider Types. AI-native platforms and service intermediaries emerge between clients and traditional providers, capturing workflow and data visibility. This creates disintermediation risk, particularly for routine or repeatable work.
Clients More Willing to Move Work from You. Incumbency advantages weaken as clients gain confidence experimenting with GenAI-enabled alternatives. Relationship stickiness declines, switching costs drop, and client bargaining leverage increases. Providers must continuously justify value amid shifting expectations.
Clients Demand/Capture GenAI-Enabled Savings. Clients explicitly seek to translate efficiency claims into lower bills or rebased budgets. Even when the efficiency gains are theoretical, the narrative itself becomes leverage in negotiations.
Deskilling. Automation reduces opportunities for early-career professionals to learn through manual work. This can threaten long-term capability development and succession planning, creating future talent risk even as short-term efficiency rises.
Taken together, the foregoing opportunities and threats form the crosscurrents of GenAI’s commercial impact. They are not forecasts; they are vectors of perception—signals of the assumptions, anxieties, and ambitions driving real decisions under extreme uncertainty. We expect your assessment to be impressionistic. The market is still forming, and even conflicting responses are valuable. The objective is to map how the industry is thinking and feeling its way forward.
The optional commentary box is there for you to identify additional opportunities or threats that you believe belong on this list but aren’t reflected in the structured response options. We know the landscape is evolving faster than any fixed framework can capture, and new crosswinds are emerging all the time—technological, regulatory, cultural, and competitive. Highlighting those unlisted dynamics helps the broader project stay current and complete. If you wish, you may elaborate on your structured responses—to clarify or contextualize your selections.
10. Your GenAI Orientation and Client Expectations. Your organization’s current GenAI adoption pattern would be best characterized as [check one]:
- Wait and See (intentionally avoiding deep engagement until value and safety clearly proven)
- Preliminary Engagement (limited, informal exploration or discussion underway)
- Initial Integration (incorporating GenAI into select workflows)
- All Deliberate Speed (pursuing scaled deployment as expeditiously as prudence permits)
- All In (moving aggressively despite heightened risk of rework or disruption)
For the table below, indicate what share of your clients you believe expects each posture from you. Broad ranges are provided for ease of selection, but your selections should be mutually consistent; that is, your selected share ranges should plausibly sum to 100%.
Dropdown Options
None = 0%
Few = 1–10%
Some = 11–25%
Many = 26–50%
Majority = 51–75%
Most = 76–100%
| Clients Expect Your Posture to Be | Share of Clients |
|---|---|
| Wait and See | [DROPDOWN][DD] |
| Preliminary Engagement | [DROPDOWN][DD] |
| Initial Integration | [DROPDOWN][DD] |
| All Deliberate Speed | [DROPDOWN][DD] |
| All In | [DROPDOWN][DD] |
[optional] Commentary. You are welcome to add categories, color, context, clarifications, or caveats to the above:
ANNOTATION: This question bridges your internal orientation and your perception of client expectations. It asks not what you know for sure, but what you assume and act on—because those assumptions already shape how you operate.
We know—and want to make explicit—that no organization has inventoried every client to determine how each one expects you to use GenAI. You’re not expected to have done that, and you’re not expected to do it now. Even the most sophisticated providers can’t pinpoint, with any precision, where every client stands, especially because many clients have been silent. The point is not measurement; the point is impression.
Every organization operates within an environment of incomplete information. You make choices, set priorities, and calibrate communication strategies based on what you believe your clients expect—beliefs formed through partial conversations, selective signals, and accumulated intuition. Those beliefs may be explicit (“Clients urge us to move quickly”) or implicit (“Clients will penalize us if we look reckless”). Either way, they shape your actual behavior.
This question seeks to make those operating assumptions visible. How do you believe your clients expect you to approach GenAI today? Do they want you to be cautious, experimental, accelerated, or aggressive? How do those perceived expectations influence your actual orientation—whether you’re more hesitant or more forward-leaning than you might otherwise be?
We recognize that clients are not monolithic. Some are enthusiastic, some skeptical, some contradictory. Your client base will inevitably distribute across a spectrum, and over time, those distributions will shift unevenly. The question’s broad ranges are designed to reflect that reality. You’re not quantifying; you’re describing the shape of the pattern as you perceive it.
Your perceptions include filling in the blanks where clients are silent, which many are. This question does not offer a “Don’t Know” option because, in practice, you cannot operate as if you don’t know. You still make assumptions that guide how you communicate, allocate resources, and anticipate client needs. Whether or not those assumptions are accurate, they shape your real behavior.
When we say your selected share ranges should plausibly sum to 100%, we mean that the overall distribution of your responses should make numerical and logical sense as a representation of your client base. In other words, if your selections were translated into percentages, the sum total of the ranges would include 100%. For example, the summed range of selecting “Few” in all five categories would be 5–50% (i.e., 1–10% x 5), meaning at least half your client base is unaccounted for. Likewise, the summed range of selecting “Majority” in all five categories would be 255–375%, reflecting the numerical and logical reality that a majority of your clients cannot fall into more than one of the mutually exclusive categories. The goal isn’t arithmetic precision; it’s internal consistency. The pattern of your responses should feel like a real distribution, not a contradiction.
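The consistency check described above is simple interval arithmetic: sum the lower and upper bounds of the bands you selected, and ask whether 100% falls inside the resulting range. A minimal sketch, offered for illustration only (the band names and ranges mirror the dropdown options; the function names are our own):

```python
# Band names and percentage ranges from the survey's dropdown options.
BANDS = {
    "None": (0, 0),
    "Few": (1, 10),
    "Some": (11, 25),
    "Many": (26, 50),
    "Majority": (51, 75),
    "Most": (76, 100),
}

def summed_range(selections):
    """Return the (min%, max%) interval implied by a list of band selections."""
    lo = sum(BANDS[s][0] for s in selections)
    hi = sum(BANDS[s][1] for s in selections)
    return lo, hi

def plausibly_sums_to_100(selections):
    """A distribution is internally consistent if 100% lies in the summed range."""
    lo, hi = summed_range(selections)
    return lo <= 100 <= hi
```

For example, selecting “Few” in all five categories yields a summed range of (5, 50), which cannot contain 100%, while “Majority” in all five yields (255, 375), which overshoots it; a mixed pattern such as one “Most,” one “Some,” and three “None” yields (87, 125), which is consistent.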
Finally, if you find the structured categories too limiting, use the optional commentary box to elaborate on your responses—to clarify, contextualize, or nuance your impressions.
11. Client GenAI Posture, Pressure, and Engagement. Please use whatever manageable sample of clients provides a directionally useful approximation of your client base (for example, your 100 largest clients or another meaningful cross section) to distribute your clients across the categories below.
The goal is not precision but coherent estimation. Your selections should reflect your impressionistic sense of where clients generally fall across the categories. While broad ranges are provided for ease of selection, they should be mutually consistent—in other words, the combined shares you select could theoretically sum to 100%.
Please reflect on both where your clients are today and how you expect that distribution to evolve over the next 12 and 36 months. Think in directional terms. For example, do you expect that, 12 months from now, enough clients will have shifted from one category to another to change the overall distribution?
Client Posture refers to your clients’ communicated stances on your use of GenAI in delivering their legal work (e.g., prohibitive, permissive, encouraging, silent, or mixed).
Client Engagement reflects how clearly and consistently clients communicate their GenAI expectations—i.e., whether their approach is coherent, sustained, and effective in shaping your organization’s behavior.
Client Internal Pressure refers to your subjective perception of the internal pressure your clients’ law departments are under to deliver GenAI-related economic benefits—such as savings. We acknowledge that, along this dimension, you can’t know this with any degree of certainty (which is part of why we have a corresponding client survey). Rather, this question seeks to surface your perceptions of your evolving operating environment, i.e., your organization’s working theory currently guiding your decisions.
Client Pressure Applied seeks to understand whether the GenAI-related pressure your clients’ law departments are under translates, or not, into commercial demands on your organization.
Dropdown Options
None = 0%
Few = 1–10%
Some = 11–25%
Many = 26–50%
Majority = 51–75%
Most = 76–100%
| Client Posture | To Date | After 12 Months | After 36 Months |
|---|---|---|---|
| Prohibit | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Discourage | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Silent | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Encourage | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Nuanced | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Contradictory Messages | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Client Engagement | To Date | After 12 Months | After 36 Months |
|---|---|---|---|
| Absent | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Superficial | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Emerging | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Effective | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Advanced | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Inconsistent | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Client Internal Pressure | To Date | After 12 Months | After 36 Months |
|---|---|---|---|
| Negative Pressure | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| No Pressure | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Talking Stage | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Light Pressure | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Moderate Pressure | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Heavy Pressure | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Client Pressure Applied | To Date | After 12 Months | After 36 Months |
|---|---|---|---|
| Negative Pressure | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| No Pressure | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Talking Stage | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Light Pressure | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Moderate Pressure | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Heavy Pressure | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
[optional] Commentary. You are welcome to add categories, color, context, clarifications, or caveats to the above:
ANNOTATION: This question turns outward—from your organization’s own orientation to the environment in which you operate. It asks how you perceive your clients’ behavior toward you when it comes to GenAI: what they communicate, how they engage, and the kinds of pressure or permission they exert. In most dimensions, this is about where your clients stand vis-à-vis your organization. The one partial exception is Client Internal Pressure, which refers to the forces acting inside your clients’ organizations that shape how they, in turn, behave toward you.
As with the prior questions, you are not expected to know any of this with precision. No provider has inventoried every client, and no one will. Even if only a small fraction of your clients have voiced explicit views, those fragments, patterns, and silences still inform the assumptions that guide your real-world decisions. This question simply makes those operating assumptions visible.
Use whatever representative slice of your client base gives you a reasonable sense of the whole—your 100 largest clients, your most active relationships, or another cross section you know well. This is a distribution exercise: Clients are not uniform or monolithic. They occupy different positions across the spectrum and will move through these categories unevenly over time.
Client Posture. This dimension captures your clients’ stated stances on your use of GenAI in their matters—what has, in fact, been communicated. Unlike the prior question, which asked you to infer what clients expect of you even if they have said nothing (offering no “Don’t Know” option), Client Posture focuses only on what you have actually heard or received—policies, directives, permissions, or prohibitions.
Here, silence itself is an answer category, because many clients have not said anything at all. That silence is part of the operating reality: It indicates where clarity is lacking, not where assumptions must be filled in. Where the Expectations in Question 10 captured your working theory of how clients think you should behave, Client Posture here captures what those clients have explicitly communicated—including, importantly, the absence of communication. The two overlap, but they are not the same, and that distinction is precisely why we ask about both.
Client Engagement. This dimension measures how actively, consistently, and effectively your clients communicate their GenAI posture. Client Engagement is not only about whether clients have spoken but about the depth, clarity, and continuity of that dialogue. Some clients may have issued one-off directives and gone quiet, while others may be sustaining regular, structured conversations about GenAI implications and opportunities.
Here, we’re interested in how much ongoing engagement you experience—how often and how substantively GenAI arises in your interactions. High engagement does not necessarily mean agreement or enthusiasm; it simply means the topic has become a standing part of the relationship. The range from “Absent” to “Advanced” reflects the level of dialogue, not its direction.
Client Internal Pressure. This dimension captures your perception of the internal forces acting on your clients’ law departments—the business, technological, or organizational pressures that influence how they engage with you about GenAI. Unlike Client Posture, which focuses on what clients have said, or Client Engagement and Client Pressure Applied, which capture what they do, Client Internal Pressure asks what you believe is driving that behavior beneath the surface.
We recognize this is, by definition, inferred rather than observed. Most clients have not explicitly discussed the pressures shaping their internal priorities. Still, you are constantly reading between the lines, interpreting intensity, speed, and tone. When clients are highly engaged or prescriptive, you may reasonably assume strong internal pressure for productivity, cost reduction, or digital transformation. When clients are silent or tentative, you may infer the opposite—or simply a lack of urgency.
This question does not offer a “Don’t Know” option because, in practice, you cannot operate as if you don’t know. You still make assumptions that guide how you communicate, allocate resources, and anticipate client needs. Whether or not those assumptions are accurate, they shape your real behavior. In many cases, the most powerful signals are implicit: the urgency in a conversation, the questions clients ask (or don’t ask), the pace of their decision-making.
This category sits conceptually upstream of the others. The internal dynamics you perceive—real or assumed—help explain the client posture, engagement, and external pressure you experience. Understanding those drivers, even imperfectly, is essential to navigating your operating environment.
Client Pressure Applied. This dimension measures how strongly clients are translating their internal priorities into external expectations—how directly their internal dynamics around GenAI are now influencing their commercial interactions with you. It captures the degree to which clients are turning internal posture and policy into action: embedding requirements in procurement, asking specific questions about GenAI in RFPs, or conditioning selection and pricing on demonstrable capability.
Client Pressure Applied represents the outward expression of a client’s internal reality. Even if few clients have formal policies, some are already acting as if GenAI capability is part of their selection calculus. Others are still passive, showing little interest or concern. The full range—from “Negative Pressure” to “Heavy Pressure”—reveals not just what clients are saying but also how far they’ve gone in converting talk into leverage. Undoubtedly, though, some are saying quite a lot while doing very little (“Talking Stage”).
Each set of options represents a continuum, and together they describe the operating environment in which you compete. We don’t expect your clients to cluster neatly—some will advance rapidly; others may stall or reverse.
The goal is coherence, not precision. Your selected share ranges should plausibly sum to 100%—meaning that if expressed as percentages, the total would land within the combined range of your selections. You can’t have every category marked Most, Few, or None, but multiple Some or Many responses can coexist if the pattern still feels like a real distribution.
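The arithmetic behind this coherence test can be sketched in a few lines of code (illustrative only, not part of the survey instrument): a set of selections is internally consistent if 100% falls between the sum of the lower bounds and the sum of the upper bounds of the chosen ranges.

```python
# Coherence check for the dropdown ranges defined above:
# a combination of selections can plausibly sum to 100% only if
# sum(lower bounds) <= 100 <= sum(upper bounds).

RANGES = {                 # label -> (low %, high %), per the dropdown options
    "None": (0, 0),
    "Few": (1, 10),
    "Some": (11, 25),
    "Many": (26, 50),
    "Majority": (51, 75),
    "Most": (76, 100),
}

def plausibly_sums_to_100(selections):
    """Return True if the selected labels could represent shares
    of a client base that total exactly 100%."""
    low = sum(RANGES[s][0] for s in selections)
    high = sum(RANGES[s][1] for s in selections)
    return low <= 100 <= high

# One selection per category (e.g., the six Client Posture rows):
print(plausibly_sums_to_100(["None", "Few", "Some", "Many", "Few", "Some"]))    # True
print(plausibly_sums_to_100(["Most", "Most", "None", "None", "None", "None"]))  # False
```

This is why, for example, marking every category "Few" cannot cohere (six selections cap out at 60%), while several "Some" or "Many" selections can comfortably coexist.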
Many clients will still be silent, absent, or applying no pressure; that silence itself is a valuable signal. The relative weight you assign to those categories shows how providers are interpreting a market that is still forming its expectations.
Ultimately, this question is about your impressionistic sense of your clients’ current and emerging behavior toward you with respect to GenAI—how they’re responding to it, and how their internal realities are shaping what they ask of you. Even incomplete impressions reflect the environment in which you operate, and surfacing those perceptions is the point.
Use the optional commentary box to identify additional client behaviors or trends not captured here, or to elaborate on your structured responses—to clarify, contextualize, or nuance your impressions.
12. Better | Faster | Cheaper. For the clients that are, in fact, applying GenAI-related commercial pressure to your organization, what is your impressionistic sense of their priorities with respect to the commercial outcomes they expect from GenAI over the next 36 months?
Better: Improved quality, risk management, and business outcomes
Faster: Shorter cycle times, quicker turnaround, and higher throughput
Cheaper: Lower total legal spend for comparable outcomes
| | Weight |
|---|---|
| Better | [DROPDOWN][DD] |
| Faster | [DROPDOWN][DD] |
| Cheaper | [DROPDOWN][DD] |
ANNOTATION: This item is a weighting exercise, focused specifically on the subset of your clients that are applying GenAI-related commercial pressure. We have already asked you to identify what proportion of your client base falls into that category, so there is no need to revisit that work here. This question pertains only to that defined group—whether it represents a large share of your clients or a relatively small one.
We are also not asking you to certify what clients “really” want. Clients rarely articulate their priorities with that level of granularity. Instead, you have scattered signals that shape day-to-day decision-making: comments in commercial conversations, procurement dynamics, matter-level negotiations, pricing tensions, and the broader patterns that inform your operating assumptions. Those directional impressions are precisely what this question is designed to capture.
Allocating 10 points across the three categories reveals the shape of those impressions—it neither implies precision nor suggests that any category is unimportant. This is not about whether good outcomes are good; it is about how you believe clients are weighting the commercial upside they expect GenAI to deliver over the next 36 months.
This clarity matters. Providers often report that clients reference all three dimensions, but the degree to which each actually drives behavior is less obvious. By asking you to distribute points, we can surface specificity and better understand any expectations gaps between how clients describe their priorities and how those priorities are perceived within the provider community.
No single allocation is “right.” The purpose of the question is to support a more coherent, system-level understanding of GenAI-related commercial expectations, and to help establish a shared language for discussing those expectations with greater nuance.
While the dropdown options are essential for comparative benchmarking, you are welcome to elaborate in the entirely optional commentary box if you feel constrained by the structured responses.
[optional] Commentary. You are welcome to add categories, color, context, clarifications, or caveats to the above:
13. Responsibility to Drive Change. In conversations about GenAI, clients often urge providers to be more proactive—“understand our needs, come to us with solutions, help us capture value.” Providers, meanwhile, tend to frame themselves as responsive—“tell us what you want, and we’ll deliver it.” The resulting tension is less about disagreement than diffusion of responsibility.
Thinking about the economic upside clients expect to capture from GenAI integration over the next 36 months, how should responsibility for driving that change be distributed between (i) providers proactively improving delivery, pricing, or efficiency in ways that economically benefit clients and (ii) clients explicitly defining expectations and enforcing them through their buying behavior?
Then, in practice, how do you believe responsibility will actually be distributed?
Provider Responsibility/Client Responsibility
0% Providers/100% Clients
20% Providers/80% Clients
40% Providers/60% Clients
50% Providers/50% Clients
60% Providers/40% Clients
80% Providers/20% Clients
100% Providers/0% Clients
| | |
|---|---|
| How Responsibility Should Be Distributed | [DROPDOWN][DD] |
| How Responsibility Will Likely Be Distributed | [DROPDOWN][DD] |
[optional] Commentary. You are welcome to add categories, color, context, clarifications, or caveats to the above:
ANNOTATION: This question explores accountability for translating GenAI’s potential into measurable, commercial outcomes benefiting clients—whether clients or providers should bear more responsibility for driving change, and how that dynamic is actually playing out.
From the client perspective, it may often seem self-evident that providers should lead. They are, after all, the experts in delivery—the ones closest to the work, the processes, the data, and the technologies that can make that work more efficient or more valuable. Clients can articulate outcomes—faster cycle times, more predictable pricing, higher quality—but it’s the providers that are best positioned to determine how to get there. Expecting clients to prescribe innovation in legal delivery is inconsistent with the professional division of labor that underpins the entire market.
Yet providers are, at their core, commercial enterprises. Like any other business, they are accountable to their own bottom line. They exist to generate profit, reward partners or shareholders, and sustain their workforce. When they invest in GenAI—licenses, infrastructure, specialized roles, training—they incur real costs that must be justified by commercial return. Expecting providers to unilaterally fund and operationalize innovation that primarily benefits clients, without clear mechanisms for reward or recovery, runs counter to basic economic logic.
This question invites a candid reckoning with alignment versus aspiration:
- “Should” reflects an idealized view of partnership—how responsibility ought to be shared if both sides acted in concert, pursuing sustainable value creation.
- “Will” reflects the market reality—how responsibility is actually likely to be distributed based on current behaviors, procurement norms, and incentive structures.
The gap between the two is diagnostic. If clients believe providers should carry much of the load but providers believe they will not be rewarded for doing so, it signals structural misalignment—where rhetoric about innovation outpaces the economic models that would sustain it. Conversely, if the gap is narrow, it suggests an emerging equilibrium in which both sides are beginning to play complementary roles: clients acting on clearly expressed expectations, and providers responding with measurable change.
Behind this lies a deeper commercial question: When and how are proactive providers rewarded?
History offers limited evidence. Despite years of client surveys extolling “innovation” and “value,” few providers can point to consistent financial upside for being early movers. Most see modest reputational benefits or incremental retention gains, not material margin improvement. When innovation reduces the number of billable hours, providers are often applauded—but rarely compensated for the efficiency they create. That dynamic breeds caution: Why rush to automate yourself out of revenue when clients still buy on inputs, not outcomes?
At the same time, providers that wait for explicit client demand risk falling behind as expectations shift. Clients’ own adoption curves—and their growing familiarity with technology-driven service models—may eventually force recalibration. The question is whether market forces will evolve fast enough to reward proactive change before risk-averse economics freeze it.
This section is designed to surface those competing theories of responsibility and reward. It connects directly to the Client Survey, where law departments are asked similar questions about their own accountability for driving GenAI-enabled change. Comparing the two perspectives will reveal whether each side believes the other should lead—and whether that standoff helps explain the current pace of transformation.
To the extent you find the structured benchmarking options too confining, you are welcome to use the optional commentary box to clarify your reasoning or describe the conditions that shift the balance. For example, do proactive providers fare better with certain client segments or pricing models? Does scale or specialization change the calculus? Even brief explanations—why you answered as you did or what would need to change for “should” and “will” to converge—add meaningful insight.
Ultimately, this question asks you to locate your position within an evolving market choreography: who leads, who follows, and under what incentives real progress becomes sustainable.
14. Cost of Use. Providers are investing in GenAI systems, licenses, and governance as well as the associated specialized personnel. These investments create internal costs that may or may not appear as distinct charges on client invoices. What percentage of your GenAI-related costs are you currently absorbing as overhead versus explicitly charging through to clients? What do you hypothesize the proper distribution should be? What is your operating assumption as to your clients’ views on the proper distribution?
Absorbed by Providers as Overhead/Explicitly Charged Through to Clients
0% Providers/100% Clients
20% Providers/80% Clients
40% Providers/60% Clients
50% Providers/50% Clients
60% Providers/40% Clients
80% Providers/20% Clients
100% Providers/0% Clients
| | |
|---|---|
| Current Distribution of GenAI Costs | [DROPDOWN][DD] |
| Provider View: Proper Distribution of GenAI Costs | [DROPDOWN][DD] |
| Client View: Proper Distribution of GenAI Costs | [DROPDOWN][DD] |
[optional] Commentary. You are welcome to add categories, color, context, clarifications, or caveats to the above:
15. GenAI Maturity. How mature is your organization’s use of GenAI to deliver legal services today? How mature do you expect to be after the next 12 months and 36 months?
What about your competitors? What about your clients? What are clients’ evolving expectations of your maturity?
The first part of the question asks you to rate your present and planned maturity levels using dropdown options. The remainder asks you to distribute your competitors and your clients based on your perception of their maturity, including your expectations of how your environment will evolve—e.g., do you anticipate that, 12 months from now, enough clients will have graduated from one maturity category to another to change your answer?
This question seeks to surface your perception of your operating environment—it does not make the impossible ask that you accurately assess the objective reality of a shifting, opaque, crowded, confusing, competitive landscape. We recognize you are unlikely to know how mature even one of your clients or competitors is, let alone all of them—though feel free to focus your reflection on your primary competitors and biggest clients. We’re simply seeking insight into your organization’s working theory of where it stands vis-à-vis the market, as well as where your organization believes both it and the market are headed in the near and medium terms.
Dropdown Options
Dormant (no formal activity; limited to individual curiosity or tinkering without organizational recognition)
Exploratory (isolated pilots or proofs of concept within a department or practice group; learning phase with no consistent framework)
Emerging (early repeatable use cases appear; governance conversations begin; some processes adjusted to integrate GenAI)
Operational (GenAI tools incorporated into regular workflows across multiple teams; usage is guided by policies and supported infrastructure)
Scaled (organization-wide adoption with high usage across many areas; systematic governance, budget allocation, and measurable business impact)
| | To Date | After 12 Months | After 36 Months |
|---|---|---|---|
| Your Organization’s GenAI Maturity | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
Dropdown Options
None = 0%
Few = 1–10%
Some = 11–25%
Many = 26–50%
Majority = 51–75%
Most = 76–100%
| Competitor GenAI Maturity | To Date | After 12 Months | After 36 Months |
|---|---|---|---|
| Dormant | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Exploratory | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Emerging | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Operational | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Scaled | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Client Expectations of Provider GenAI Maturity | To Date | After 12 Months | After 36 Months |
|---|---|---|---|
| Dormant | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Exploratory | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Emerging | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Operational | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Scaled | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Law Department GenAI Maturity | To Date | After 12 Months | After 36 Months |
|---|---|---|---|
| Dormant | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Exploratory | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Emerging | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Operational | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
| Scaled | [DROPDOWN][DD] | [DROPDOWN][DD] | [DROPDOWN][DD] |
[optional] Commentary. You are welcome to add categories, color, context, clarifications, or caveats to the above:
ANNOTATION: This question asks you to assess maturity—yours, your competitors’, your clients’—and your clients’ expectations of providers like you. It is not about precision or omniscience; it is about how you currently see the operating landscape. No provider has perfect visibility. Most possess only partial, sometimes contradictory, information. That’s normal. What matters is the pattern of perception—the working theories that guide how you prioritize, budget, and position your organization amid uncertainty.
We recognize that no one truly knows the maturity of competitors or clients. You’re not expected to. The point is to surface your operating assumptions—the mental models through which you interpret the market. Even if those assumptions prove wrong later, they shape real decisions now: investment timing, messaging, hiring, and pricing.
We understand that even locating your own organization on this scale can spark debate. The definitions are directional: They describe recognizable stages with fuzzy boundaries. Please apply them pragmatically, not pedantically. If you fall between categories, choose the one that best fits your center of gravity today.
In distributing your competitors and clients, the goal is coherence, not precision. Your selected share ranges should plausibly sum to 100%—that is, translated into percentages, 100% would land within the combined range of your selections. You cannot have every category marked “Most” or “None,” but several “Some” or “Many” responses can coexist if the overall pattern still makes sense as a distribution.
Dormant. No formal GenAI activity. Curiosity exists, but it’s scattered—an associate tinkering, a partner experimenting privately, a technologist dabbling on the side. There is no official sanction, governance, or resource allocation. “Dormant” does not mean ignorant or indifferent; it means the organization has not yet converted interest into intentional effort. Many providers begin here, often by choice—observing, waiting, learning from early adopters before committing.
Exploratory. GenAI has entered the conversation, typically through isolated pilots or proofs of concept within one practice, region, or function. Activity is still experimental, often driven by enthusiasts rather than strategy. There may be excitement and slides but limited structure or measurement. The organization is learning by doing, testing feasibility, and surfacing use cases. Governance remains ad hoc; policies and training are nascent. “Exploratory” signals motion without yet signaling direction.
Emerging. Repeatable use cases are forming. Tools or workflows are showing real traction, and governance discussions are underway—risk management, responsible-use guidelines, training pilots, or client communications. GenAI is becoming a visible topic in leadership meetings and client interactions. Processes are being adjusted to make early adoption safer and more consistent. Momentum is uneven but recognizable; the organization is learning to scale responsibly rather than experiment endlessly.
Operational. GenAI tools are now integrated into regular workflows across multiple teams or practices. Adoption is supported by policies, training, and infrastructure rather than individual enthusiasm. Metrics and accountability start to appear: usage tracking, governance routines, budget lines. The work feels normalized: GenAI is a capability, not a novelty. This stage reflects institutionalization without saturation—the technology is embedded enough to affect delivery, but it is not yet universal.
Scaled. Organization-wide adoption with systemized governance, clear ownership, and measurable business impact. GenAI is woven through delivery models, supported by dedicated budgets and roles, and aligned to commercial outcomes. The organization manages GenAI as infrastructure: secure, auditable, and continually improved. At this level, usage is broad, measurement is routine, and benefits—efficiency, margin, differentiation—are tracked. Few, if any, providers are here yet.
If your internal debates, client feedback, or competitor observations don’t fit neatly into these buckets, use the optional commentary box to say so. You can explain why a category feels wrong, describe a hybrid state, or note the limits of what you can currently see. Though entirely optional, it’s the place for the nuance these dropdowns can’t communicate.
16. GenAI Obstacles. Please identify your organization’s primary obstacle in adopting or scaling GenAI. Then rate the seriousness of the remaining obstacles based on how much each slows, complicates, or limits your progress. This question is about what feels most binding today—what stands most in the way of your GenAI ambitions, whether practical, cultural, technical, or commercial.
Dropdown Options
Not an Obstacle (not currently limiting progress or decision-making)
Minor Obstacle (noticeable friction, but easily managed or worked around)
Material Obstacle (substantive barrier that slows meaningful progress)
Major Obstacle (significant constraint requiring leadership attention)
PRIMARY Obstacle (single most serious factor currently impeding your GenAI advancement)
| Obstacle | Rating |
|---|---|
| GenAI Not There Yet | [DROPDOWN][DD] |
| Search-to-Implementation Speed/Costs | [DROPDOWN][DD] |
| Unclear ROI/Business Case and Attendant Budget Constraints | [DROPDOWN][DD] |
| Change Management and Cultural Resistance | [DROPDOWN][DD] |
| Talent and Skills Gap | [DROPDOWN][DD] |
| Data Readiness and Quality | [DROPDOWN][DD] |
| Legacy Systems and Integration Difficulties | [DROPDOWN][DD] |
| Information Security and Client Confidentiality | [DROPDOWN][DD] |
| Regulatory Volatility and Professional Responsibility Ambiguity | [DROPDOWN][DD] |
| Client Uncertainty and Demand Variability | [DROPDOWN][DD] |
[optional] Commentary. You are welcome to add categories, color, context, clarifications, or caveats to the above:
ANNOTATION: This question focuses on friction. After identifying what’s expected, we now ask what’s preventing progress—what most constrains your ability to adopt or scale GenAI in legal service delivery.
You are asked to identify a single “primary obstacle”—the most immediate structural barrier your organization faces—and then to rate how serious the other listed challenges are. We recognize that every organization will face some combination of these issues, but the exercise of naming one as primary helps clarify what’s truly gating acceleration, not just complicating it.
We also recognize the irony: You’re unlikely to have a fully objective view of your own obstacles, much less their relative weight. Many of these factors overlap: Unclear ROI may be the surface expression of cultural resistance—i.e., the cultural resistance manifests as complaints about the lack of a clear business case. Client uncertainty may be the downstream effect of regulatory ambiguity. The categories are not meant to be neat or mutually exclusive; they’re designed to capture recurring patterns of friction that shape GenAI progress across the ecosystem.
GenAI Not There Yet. For some, the primary barrier is the technology itself—its current limits in accuracy or reliability within legal contexts. Even promising tools often fail to meet the profession’s threshold for confidence and defensibility. This can breed caution: the sense that GenAI is “almost but not yet ready” for high-stakes use. The resulting skepticism is rational, not regressive; it reflects the uniquely high reliability standards of legal work.
Search-to-Implementation Speed/Costs. Identifying, testing, and integrating tools takes time and money. Pilots are expensive, success rates are uneven, and lessons learned are hard to scale. Many organizations struggle to move from experimentation to operationalization because discovery cycles consume disproportionate bandwidth. For some, the obstacle is not so much resistance as a simple capacity constraint: too many variables, too few hands.
Unclear ROI/Business Case and Budget Constraints. Even when enthusiasm is high, progress stalls without a credible economic case. Many leaders hesitate to fund initiatives that can’t demonstrate clear returns or client impact. This is particularly acute in professional services, where productivity gains don’t automatically translate into higher revenue. The result: Projects pause not for lack of belief but for lack of a budget-backed rationale.
Change Management and Cultural Resistance. Organizations don’t adopt technology; people do. Time pressure, entrenched habits, and fear of obsolescence can all slow adoption. Lawyers, in particular, are conditioned to limit downside risks, prizing precedent and precision over experimentation. “Change management” in this context is about alignment, not attitude: creating space, safety, and incentive for behavior change at scale.
Talent and Skills Gap. There’s a shortage of professionals who can bridge the gap between technology and practice. Knowing what GenAI can do is not the same as knowing how to make it work inside a matter, team, or workflow. The challenge is not just hiring technical talent—it’s cultivating applied literacy among lawyers, operations, and technologists alike.
Data Readiness and Quality. GenAI depends on what it can access and trust. In most legal organizations, knowledge is trapped across systems, formats, and minds. Even the best models underperform when data is fragmented, inconsistent, or poorly tagged. This obstacle is foundational: Until knowledge is structured and findable, GenAI remains limited to shallow use cases.
Legacy Systems and Integration Difficulties. Most providers operate within complex, interdependent IT ecosystems. Introducing GenAI often means confronting technical debt—aging document systems, bespoke matter-management software, or rigid security models. Integration hurdles make innovation feel like surgery: invasive, slow, and risky.
Information Security and Client Confidentiality. Legal work involves sensitive, privileged, and regulated information. Many organizations are constrained not by unwillingness but by duty. Even if tools are secure in principle, risk tolerance varies widely. Until model training, data isolation, and auditability mature, this remains a universal gating concern.
Regulatory Volatility and Professional Responsibility Ambiguity. Rules are evolving faster than clarity. Bar guidance, data residency laws, and AI governance frameworks differ by jurisdiction and change frequently. Providers are caught between undercompliance risk and overcompliance paralysis. The absence of consistent standards breeds confusion about what is “safe enough” and consternation about how quickly that can change.
Client Uncertainty and Demand Variability. Clients themselves are inconsistent: some encourage experimentation, others prohibit it, some discourage and encourage simultaneously, and many haven’t said anything at all. For providers, this heterogeneity makes the investment calculus difficult. Should you move fast to serve the early adopters or slow down to avoid alienating the skeptics? Client inconsistency becomes an obstacle in its own right: it undermines planning.
We recognize that these obstacles overlap. They’re not meant to be silos; they’re interdependent symptoms of a market still forming its norms. You’re not expected to have perfect clarity on what’s blocking progress, but you’re making decisions based on your best interpretation—and that’s what we’re capturing here.
Finally, if your main challenge isn’t reflected here—or if the interplay among them is more complicated—use the optional commentary box to elaborate, clarify, or add missing context.
Completely Optional Insights Section
Every question that follows is optional. The ask is to be selective—share where you have perspective, insight, or experience that others can benefit from. If a question sparks a strong answer, we’d love to hear it. If not, skip it.
This section goes beyond benchmarking. The inputs we gather here help sharpen our analysis, inform L.E.G.A.L.-related programming (including events), and surface themes worth discussing across the ecosystem. Your responses may also open the door to follow-up (e.g., opt-in case studies, working sessions, or speaking invitations). Absent your express permission, we will use these inputs only as paraphrased, non-attributed composite insights designed to prevent identification of any provider or individual. We will not quote you verbatim—named or unnamed—without separate, express written consent.
Our objective is to gather useful “color” while limiting burden. The richness of this initiative comes from diverse lived experience across the industry: what you’ve tried, what you’ve learned, and what you’ve seen work (or not). Your selective sharing helps form a clearer picture of where we are, where we’re headed, the barriers we face, and the misconceptions worth dispelling.
The survey is persistent. You may have nothing to add today, and that’s fine—you can always return later, whether prompted by a subsequent client request or your own readiness.
17. Strongest Signal. What do you consider the clearest indicator of your organization’s commitment to integrating GenAI into legal service delivery—the signal that most cuts through the noise to indicate your organization is engaged in more than just innovation theater?
ANNOTATION: This question asks you to identify the strongest single signal that your organization’s commitment to GenAI is tangible—not aspirational, not exploratory, but observable in practice. We’re asking what you consider the most unmistakable proof point that GenAI is no longer an experiment inside your organization but a recognized, resourced part of how you deliver, compete, or grow.
For some organizations, that signal might be financial: a ring-fenced budget, a permanent cost center, or a line item in the capital plan. For others, it might be organizational: a named leadership role, a steering committee, or a dedicated GenAI function with real decision-making authority. In some cases, it’s commercial: a client-facing product, an offering launched into the market, or a fee model designed around GenAI leverage. And for others still, it may be behavioral: incorporating GenAI into matter intake, training requirements, or client reporting.
There is no single right answer, because commitment manifests differently across business models and maturity levels. What matters is that your example feels dispositive to you—a concrete demonstration that GenAI is now part of how your organization allocates time, money, and leadership attention.
We recognize your signal may not align neatly with the categories already covered in this survey. That’s intentional. This question gives you space to capture a meaningful signal that doesn’t fit anywhere else.
18. Client Commercial Pressure Example(s). Please share an anonymized example of a client applying commercial pressure related to your integration—or lack thereof—of GenAI into legal service delivery. One illustrative example is sufficient, but if multiple come to mind, you are welcome to include them.
ANNOTATION: This question seeks a real-world illustration of how client expectations around GenAI are translating into commercial behavior—how the conversation is showing up in matters, budgets, or relationships.
We recognize that “pressure” can take many forms, some direct, others subtle. It might be a client explicitly conditioning a pitch, panel renewal, or pricing discussion on your GenAI capabilities—or, conversely, warning you against any GenAI use in their work. It might also surface through indirect signals: a request for proof of efficiency gains, a questionnaire about responsible use, or a procurement clause inserted “just to be safe.” These are all commercial pressures, because they shape how work is awarded, scoped, or priced.
Your example can be specific but anonymized—a single incident that captures the flavor of client behavior you’re seeing. It doesn’t have to be dramatic. Even quiet or ambiguous signals matter: A client asking “what’s your AI policy?” for the first time can be as telling as one demanding an AI-driven discount.
We understand that you may not always know whether a client’s actions were truly driven by GenAI concerns. That uncertainty itself is part of the story. The purpose here is to surface how you interpret and experience client behavior, not to prove causation. The broader survey will compare these anecdotal patterns to the Client Survey to identify misalignments and blind spots.
You are also welcome to include more than one example—especially if they illustrate contrasting dynamics (for example, one client rewarding GenAI integration, another penalizing it). However, a single, well-chosen example is perfectly adequate. The intent is depth of insight, not quantity of anecdotes.
Think about moments where GenAI became a factor in:
- Provider selection – e.g., being asked about your GenAI capabilities in an RFP or panel review.
- Pricing or fee discussions – e.g., a client pressing for discounts based on perceived GenAI efficiency.
- Scope or workflow negotiations – e.g., a client insisting that certain tasks must or must not involve GenAI tools.
- Contract terms – e.g., new clauses around data use, model training, or disclosure of AI use.
- Performance expectations – e.g., clients asking how GenAI has improved turnaround or quality.
Each of these reflects commercial pressure in different forms—demanding, defensive, or curious.
19. Client Misconception and Perspective Shift. What is the most consequential misconception clients commonly harbor about the use of GenAI in legal service delivery? If more clients could understand one thing better about working with providers like you on GenAI integration, what would make the biggest difference in furthering both sides’ best interests?
ANNOTATION: This question invites you to surface the gap between perception and reality—the places where clients’ assumptions about GenAI meaningfully diverge from your actual practices, capabilities, or constraints. Every new technology creates misunderstandings; GenAI’s pace and opacity have made those misunderstandings unusually pronounced.
We’re interested in the misconception that does the most commercial, relational, or strategic damage—the one that, if corrected, would most improve how clients and providers collaborate. It may involve overestimation (“GenAI already automates legal reasoning”), underestimation (“GenAI can’t safely touch any legal work”), or misplacement (“Efficiency gains automatically translate to price reductions”).
Your response can focus on a single recurring pattern, such as:
- Clients assuming GenAI replaces professional judgment rather than augments it
- Clients believing responsible use is equivalent to prohibition
- Clients expecting immediate cost savings without accounting for investment, oversight, and integration costs
- Clients assuming all GenAI tools share the same data-handling risks
- Clients failing to recognize the difference between experimentation and production-grade deployment
The goal is not to vent frustration but to clarify where misalignment lives—what you wish clients grasped about the realities of integrating GenAI into high-stakes, regulated professional work. These insights help the broader market understand where education, transparency, or new norms are most needed.
We recognize that even identifying the “biggest misconception” requires interpretation. You won’t have a survey of client beliefs, but you do have repeated experiences—questions, objections, and conversations that reveal consistent misunderstandings. That pattern is the evidence we’re after.
Equally, the “perspective shift” portion asks what clients could see differently. What understanding would move the relationship forward? For example, that meaningful adoption requires iteration, not instant transformation; that responsible use involves governance, not abstinence; or that GenAI’s value often lies in quality, speed, and risk mitigation—not just reduced cost.
Think of this as an opportunity to reframe the narrative. Your response helps identify where communication, expectation management, and market education can make the most difference in bridging the provider–client GenAI gap.
20. Client Mixed Message(s). Please share an anonymized example of a client communicating inconsistent or conflicting messages about the use of GenAI—signals that created challenges or slowed adoption (e.g., encouraging innovation but prohibiting actual use; requesting proof of efficiency while restricting the tools that deliver it). One example is sufficient; if others come to mind, you’re welcome to include them.
ANNOTATION: This question looks at contradictions providers are encountering when interfacing with clients around GenAI.
Where Question 19 asked about misperception, this one explores inconsistency—the moments when clients’ stated positions, procurement behaviors, or internal communications don’t line up.
Every market in transition produces mixed messages. The legal market is no exception: One department demands speed and savings while another demands caution and control. Clients may ask for evidence of GenAI efficiency in proposals but later reject any GenAI use in engagement terms. They may talk about innovation as a differentiator but still buy on familiarity or rate.
We’re asking you to describe the contradictions that matter most—the ones that complicate investment, planning, or relationship management. Examples might include:
- A client’s legal ops group pushing for automation while its security team blocks every GenAI workflow
- A general counsel praising innovation in public but privately warning providers not to “experiment” on their work
- Clients insisting on transparency about GenAI use but declining to share their own policies or risk criteria
These are not failures of honesty; they’re symptoms of an unsettled market still negotiating new norms. The aim is to capture how you interpret and navigate those contradictions—how you decide which messages to prioritize and which to discount when making real business choices under uncertainty.
You are not expected to have documented every instance of inconsistency; few providers could. We’re looking for the pattern you perceive from repeated conversations and behavior—your lived sense of how clients’ internal misalignment manifests commercially.
Think of this as another diagnostic of the GenAI transition: the distance between clients’ aspirations, anxieties, and actual conduct. Your examples will help reveal where alignment efforts, education, or clearer boundaries are most needed.
Provide a brief, anonymized example—or several if they illustrate distinct forms of contradiction. Focus on what those mixed messages mean for you: how they affect strategy, communication, or investment decisions.
21. Client Use Case Policies. In keeping with the theme of mixed messages, and to help elucidate just how confounding the current operating environment can be for providers, please distribute your clients, as best you can, across the categories below according to their communicated policy on GenAI for each specific use case. Rely on whatever manageable sample (e.g., your 100 largest clients) provides a directionally useful approximation.
Dropdown Options
None = 0%
Few = 1–10%
Some = 11–25%
Many = 26–50%
Majority = 51–75%
Most = 76–100%
Public Tools, Nonconfidential Work (public tools = Gemini, ChatGPT, Claude, Perplexity, etc.)
| | Share of Clients |
|---|---|
| Silence | [DROPDOWN][DD] |
| Prohibit | [DROPDOWN][DD] |
| Consent Required | [DROPDOWN][DD] |
| Notice Required | [DROPDOWN][DD] |
| Permit | [DROPDOWN][DD] |
| Encourage | [DROPDOWN][DD] |
| Incoherence | [DROPDOWN][DD] |
| Nuance | [DROPDOWN][DD] |
Public Tools, Confidential Work (public tools = Gemini, ChatGPT, Claude, Perplexity, etc.)
| | Share of Clients |
|---|---|
| Silence | [DROPDOWN][DD] |
| Prohibit | [DROPDOWN][DD] |
| Consent Required | [DROPDOWN][DD] |
| Notice Required | [DROPDOWN][DD] |
| Permit | [DROPDOWN][DD] |
| Encourage | [DROPDOWN][DD] |
| Incoherence | [DROPDOWN][DD] |
| Nuance | [DROPDOWN][DD] |
Private Tools, Nonconfidential Work (private tools = ChatGPT for Enterprise, Legora, Harvey, LiteraOne, etc.)
| | Share of Clients |
|---|---|
| Silence | [DROPDOWN][DD] |
| Prohibit | [DROPDOWN][DD] |
| Consent Required | [DROPDOWN][DD] |
| Notice Required | [DROPDOWN][DD] |
| Permit | [DROPDOWN][DD] |
| Encourage | [DROPDOWN][DD] |
| Incoherence | [DROPDOWN][DD] |
| Nuance | [DROPDOWN][DD] |
Private Tools, Confidential Work (private tools = ChatGPT for Enterprise, Legora, Harvey, LiteraOne, etc.)
| | Share of Clients |
|---|---|
| Silence | [DROPDOWN][DD] |
| Prohibit | [DROPDOWN][DD] |
| Consent Required | [DROPDOWN][DD] |
| Notice Required | [DROPDOWN][DD] |
| Permit | [DROPDOWN][DD] |
| Encourage | [DROPDOWN][DD] |
| Incoherence | [DROPDOWN][DD] |
| Nuance | [DROPDOWN][DD] |
Use Client Confidential Info to Train Our Models Only for Client’s Own Work (client confidential data used to fine-tune/train models only applied to that client’s own matters)
| | Share of Clients |
|---|---|
| Silence | [DROPDOWN][DD] |
| Prohibit | [DROPDOWN][DD] |
| Consent Required | [DROPDOWN][DD] |
| Notice Required | [DROPDOWN][DD] |
| Permit | [DROPDOWN][DD] |
| Encourage | [DROPDOWN][DD] |
| Incoherence | [DROPDOWN][DD] |
| Nuance | [DROPDOWN][DD] |
Use Client Confidential Info to Train Our Models for General Work (client confidential data used to fine-tune/train models; use not restricted to client’s own matters but rather applied generally across matters for multiple clients)
| | Share of Clients |
|---|---|
| Silence | [DROPDOWN][DD] |
| Prohibit | [DROPDOWN][DD] |
| Consent Required | [DROPDOWN][DD] |
| Notice Required | [DROPDOWN][DD] |
| Permit | [DROPDOWN][DD] |
| Encourage | [DROPDOWN][DD] |
| Incoherence | [DROPDOWN][DD] |
| Nuance | [DROPDOWN][DD] |
Use GenAI for Legal Research (e.g., to produce full research outputs like memos; not ambient AI that improves search, etc.)
| | Share of Clients |
|---|---|
| Silence | [DROPDOWN][DD] |
| Prohibit | [DROPDOWN][DD] |
| Consent Required | [DROPDOWN][DD] |
| Notice Required | [DROPDOWN][DD] |
| Permit | [DROPDOWN][DD] |
| Encourage | [DROPDOWN][DD] |
| Incoherence | [DROPDOWN][DD] |
| Nuance | [DROPDOWN][DD] |
[optional] Commentary. You are welcome to add color, context, clarifications, or caveats to the above:
ANNOTATION: This question continues the theme of mixed messages by asking how your clients have communicated—or not—policy positions on specific GenAI use cases. It recognizes that providers now operate in an environment where most clients are silent and those that have communicated have taken very different positions—including some positions that are hard to parse or are simply internally inconsistent.
We do not expect you to know every client’s policy. In fact, the absence of information is part of the insight. Silence will likely dominate—many clients have not articulated a formal position, even as they ask questions or express concern. Among those who have, policies span the full spectrum, from outright prohibitions to enthusiastic encouragement, with every imaginable shade of “maybe” in between. Your task is simply to approximate that distribution. The goal is coherence, not precision. Your share ranges should plausibly sum to 100%, reflecting a pattern that makes directional sense, not mathematical exactitude.
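To make the "plausibly sum to 100%" guidance concrete, here is a minimal sanity-check sketch a respondent could apply to their own selections. The range midpoints, the `plausibly_sums_to_100` helper, and the wide tolerance are illustrative assumptions, not part of the survey instrument:

```python
# Hypothetical sanity check for Question 21: do the selected share
# ranges plausibly sum to ~100%? Midpoints are assumed, not prescribed.
RANGE_MIDPOINTS = {
    "None": 0.0,       # 0%
    "Few": 5.5,        # 1–10%
    "Some": 18.0,      # 11–25%
    "Many": 38.0,      # 26–50%
    "Majority": 63.0,  # 51–75%
    "Most": 88.0,      # 76–100%
}

def plausibly_sums_to_100(selections, tolerance=25.0):
    """Return True if the midpoints of the chosen ranges land near 100%.

    `selections` maps each category (Silence, Prohibit, ...) to one of
    the dropdown labels above. The generous tolerance mirrors the
    survey's stated goal of coherence, not mathematical exactitude.
    """
    total = sum(RANGE_MIDPOINTS[label] for label in selections.values())
    return abs(total - 100.0) <= tolerance

example = {
    "Silence": "Majority",      # ~63%
    "Prohibit": "Some",         # ~18%
    "Consent Required": "Few",  # ~5.5%
    "Notice Required": "Few",   # ~5.5%
    "Permit": "Few",            # ~5.5%
    "Encourage": "None",
    "Incoherence": "None",
    "Nuance": "None",
}
print(plausibly_sums_to_100(example))  # midpoints total ~97.5 → True
```

A distribution that fails this loose check (say, "Most" in several categories at once) is a signal to revisit the selections, not a hard validation rule.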
The six primary categories (Silence → Encourage) map the policy continuum from no position at all to active advocacy. Most providers will find their clients spread across multiple categories; the pattern itself is revealing. Together, these responses should illustrate how uneven, and at times incoherent, the market still is.
The two edge categories—“Incoherence” and “Nuance”—capture the fuzziness of the operating reality.
Incoherence applies where client direction exists but is hard to decipher: vague wording, conflicting communications from different departments, or contract terms that appear internally inconsistent. You understand what’s been said but not what’s expected, making compliance uncertain or uncomfortable.
Nuance applies where the policy is intelligible but hyper-specific—filled with exceptions, caveats, or context-dependent rules that defy simple classification. These clients are coherent but complex: Their requirements are precise enough to be clear and yet so tailored that they resist any standard label.
These two categories help differentiate between confusion (Incoherence) and complexity (Nuance). Both are analytically valuable. Incoherence highlights where the market lacks shared language. Nuance shows where sophistication has outpaced standardization.
The broader point is that providers must make real decisions under pervasive uncertainty. For most client relationships, you are inferring expectations rather than following explicit policy. Silence dominates, and where guidance exists, it’s fragmented. That combination—little information and no clear pattern—creates a confounding operating environment. Your selections here map that uncertainty: They show how providers interpret a world in which compliance often requires reading between the lines.
If you have more to add, you are welcome to take advantage of the optional commentary box.
22. Pivotal Lesson. What is the single most valuable insight or lesson your organization has learned so far from GenAI adoption—whether strategic, cultural, technical, or commercial?
ANNOTATION: This question invites reflection rather than reporting. We’re asking for the one lesson—the insight that has most reshaped how your organization thinks about, invests in, or approaches GenAI in legal service delivery.
Organizations often reach a moment of clarity as GenAI moves from concept to lived practice. The lesson may have emerged from planning, experimentation, execution, or client interaction. What matters is that it changed your understanding of what meaningful adoption truly demands.
Lessons take many forms. Some are strategic, revealing that scale follows structure, not enthusiasm. Others are cultural, showing that change rarely happens by persuading minds first and altering behavior later—it’s usually the reverse. Many are commercial, learned through the tension between how clients say they will reward innovation and how they actually respond once it’s real.
These are truths earned through experience—realizations forged in practice, not theory. We’re not seeking tidy conclusions about success or failure. The most instructive insights are often still in motion. Unresolved lessons are part of the learning story. Even brief responses carry weight. Collectively, they help chart the profession’s learning journey: how providers are turning experimentation into understanding, and understanding into sustainable change.
23. Looking Ahead. What potential GenAI capability or functionality—not yet available today—would most significantly improve your ability to deliver legal services?
ANNOTATION: This question looks forward from what you’ve learned to what you now hope for. We’re asking not for predictions but for informed imagination: What capability, if it existed, would materially change your ability to deliver value to clients?
The intent is to surface directional insight: what providers actually need next for GenAI to become transformative rather than incremental. That could be technical (applications that reliably handle privileged data), structural (seamless workflow integration across systems), or conceptual (explainability robust enough to satisfy client expectations). Whatever the form, we’re interested in the frontier you now recognize after what you’ve experienced so far.
Your response can be aspirational, but it should still be grounded in the realities you’ve already encountered. Truths earned through experience—what you’ve discovered about GenAI’s current limits—are often the best guide to what would unlock its next stage of value. Consider what would remove a constraint, close a gap, or convert curiosity into capability.
Even if your answer borders on speculative, it tells us something about your direction of travel—what kinds of capabilities you believe would make GenAI genuinely consequential for professional work.
This question closes the loop begun with the Pivotal Lesson. Where that question captured learning already earned, this one captures learning projected forward—how experience informs aspiration. Together, they describe both sides of the learning journey: what you now understand, and what that understanding tells you to want next.
24. Anything Else? This optional catch-all question leaves space for (but does not require) information, observations, or opinions not elicited above.
ANNOTATION: This question provides open space for input that doesn’t neatly fit into earlier categories but may be valuable to the broader dialogue. It’s entirely optional. Even brief observations can highlight areas meriting deeper exploration in the future.
25. Suggestions. What recommendations do you have to improve this survey?
ANNOTATION: This closing question is aimed at refining the instrument itself. We are asking for your candid input on how this survey could be clearer, more efficient, or more valuable.
Examples of useful feedback might include:
- Wording changes to reduce ambiguity or friction
- Adjustments to response formats (e.g., ranges, dropdowns, free text)
- Additions or deletions of topics to better capture reality
We are committed to serving the entire ecosystem. The goal is to advance the collective conversation in ways that are maximally useful and minimally burdensome. We recognize the tension between those two aims, and we welcome input on how best to resolve it.
There is no pride of authorship here. This survey is not fixed. Evolution is necessary and welcome. The healthiest evolution will be responsive to the candid feedback of those most invested in the outcome. Your suggestions directly shape how this effort improves, grows, and continues to deliver value for all participants. Thank you!
L.E.G.A.L. Provider Acknowledgment
L.E.G.A.L. (Leaders Exploring Generative AI in Law) is a permissioned intelligence system designed by LexFusion Intelligence, an arm of Baretz+Brunelle LLC. This Acknowledgment applies to your submission of responses to the L.E.G.A.L. Provider Survey. The full L.E.G.A.L. Nondisclosure Policy is available here.
By authorizing release of responses to [Requesting Client], you acknowledge and agree to the following.
1. Purpose and Design
L.E.G.A.L. is a standardized, reusable survey system designed to reduce duplicative client questionnaires while enabling longitudinal, behavior-grounded market intelligence and benchmarking.
2. Persistent Responses
Responses submitted through the Provider Survey are treated as a persistent baseline to support longitudinal analysis and reduced respondent burden. You may update responses and permissions over time as described below.
3. How Responses Are Used and Shared
L.E.G.A.L. uses responses in the following ways, with different visibility rules and controls:
A. Client-specific outputs (for this Acknowledgment)
This Acknowledgment is specific to [Requesting Client]. By proceeding, you authorize LexFusion Intelligence to provide [Requesting Client] with the following client-specific outputs:
i. Client-facing extract (Questions 1–5 only)
Questions 1–5 are designated as client-facing. If you authorize release, your individual responses to Questions 1–5 will be released to [Requesting Client] as if [Requesting Client] had conducted the survey directly. You may submit partial responses and may decline to answer any question. Receiving clients will be able to observe missing responses for Questions 1–5.
ii. Client-specific benchmark report (Questions 1–16)
Separately and in addition, Provider Survey responses may be used to produce a client-specific benchmark report for [Requesting Client]:
- Questions 1–5: If you authorize release, your responses to Questions 1–5 may be reflected in provider-attributed form (including comparative views) within the client-specific benchmark report, consistent with the client-facing designation of Questions 1–5.
- Questions 6–16: Individual provider responses are not disclosed to [Requesting Client] in an identifiable, provider-attributable way. These questions may be reflected only in de-identified and/or aggregated benchmarking outputs. Where a minimum threshold of 20 providers is met for a given question/segment, the report may include de-identified visualizations (e.g., dot plots) in which individual provider responses may be reflected as unlabeled points that are not attributable to any identified provider.
Participation and dataset completeness transparency (Questions 1–16). Client-specific reporting may provide transparency at two levels:
- Panel coverage list: A list of firms included in [Requesting Client]’s report (i.e., providers that have submitted responses and authorized release to [Requesting Client] under a Provider Acknowledgment), and a list of providers requested by [Requesting Client] but not included (e.g., did not submit and/or did not authorize release to [Requesting Client]).
- Question-level completeness: While you may decline to answer any individual question, specific benchmarks may identify which providers on the panel are excluded from a given benchmark because they did not answer the relevant question(s). Identifying that a provider did not answer a question is different from disclosing how that provider answered.
Minimum thresholds. L.E.G.A.L. does not present any client-specific segment (including averages) unless at least five providers are included for that question/segment. De-identified dot plots/distributions are shown only when at least 20 providers are included for that question/segment. Below these thresholds, the attendant benchmarks and visualizations are not provided.
iii. Optional questions (Questions 17–24): Cross-client observations and qualitative context; masked use only
Questions 17–24 are entirely optional and seek contextual inputs. Where L.E.G.A.L. references themes drawn from Questions 17–24 in client-specific reporting, it will do so only as fully masked, non-attributed composite insights—paraphrased and synthesized in a manner intended to prevent attribution to any specific provider and to avoid implying any observation is specific to [Requesting Client]. In other words, these are paraphrased observations providers are making across their client bases, not “what your firms are saying about you.” Unlike with Questions 1–16, clients will not be informed which firms chose to answer, or not answer, any of the optional Questions 17–24.
L.E.G.A.L. will not share responses to these optional questions verbatim (even if de-identified) unless we first obtain your express permission, separately and in writing.
Client-specific withdrawal of authorization for [Requesting Client]
You may withdraw this Acknowledgment at any time. Withdrawal applies prospectively, meaning unless and until sharing is re-authorized:
- [Requesting Client] will no longer receive client-facing extracts of your responses to Questions 1–5.
- Your responses to Questions 1–16 will no longer be included in any client-specific benchmarks, segmentations, or de-identified visualizations provided to [Requesting Client].
- Your responses to Questions 17–24 will not be used in future client-specific reporting for [Requesting Client], whether paraphrased/synthesized or otherwise.
Withdrawal does not retract any client-facing extract reporting delivered to [Requesting Client] while this Acknowledgment was active, and it is specific to [Requesting Client]; it does not constitute withdrawal from the L.E.G.A.L. program as a whole.
B. Program-wide composite benchmarking and reporting
A core objective of L.E.G.A.L. is to establish an industry-wide shared point of reference: market-level reporting that both you and clients, like [Requesting Client], can rely on to ground informed, fact-based dialogue. All responses (including responses to Questions 1–5 in de-identified form, and open-text responses in paraphrased/synthesized form) may therefore be used on a program-wide basis for de-identified and/or aggregated benchmarking and longitudinal analysis, including the composite market report shared with all participants, including you.
No participant (whether included or excluded), including you, is identified in the composite market report unless a participant provides express permission through a separate, written consent process.
If you wish to withdraw from program-wide composite benchmarking and reporting use, you may request program-level withdrawal by emailing LFIntel@baretzbrunelle.com. Such withdrawal applies prospectively and does not affect benchmarking already produced or delivered.
4. Client-Requested Fresh Release and Notice
From time to time, participating clients like [Requesting Client] may request a fresh release of provider responses. If a fresh release requested by [Requesting Client] includes your data, you will receive advance notice and have an opportunity, but no obligation, to update responses before delivery or to withdraw this Acknowledgment. If you take no action, the then-current submitted responses and authorization state on file will govern what is released to [Requesting Client] at the close of the response period.
5. Submission, Saving, Collaboration, and Client Authorization (Submit ≠ Release)
Responses must be submitted to be saved in the system and to enable collaboration.
Submission (saving). Clicking “Submit” saves the current state of your responses and makes them available for continued work, including by collaborators. You may submit multiple times as you refine your answers. Submitting does not, by itself, release any responses to any client.
Client authorization (release). Client visibility is controlled separately through the client-specific authorization checkbox below. Authorization applies to all responses—there is no question-by-question authorization. Authorization does not submit responses, and submission does not authorize release. Only if the authorization box is checked may your submitted responses be released to [Requesting Client] after the response period closes. Client-specific authorization may be withdrawn at any time by unchecking the box.
6. Contact Information
- Contact information (including collaborator emails) entered in the survey will be retained solely for program administration, including notices related to client-requested fresh releases. Contact information will not be shared with clients or other third parties but may be used to facilitate coordination within your organization (e.g., routing subsequent registrations to your organization’s established Primary Point of Contact), consistent with the original business purpose for which contact information was provided.
- Any client-supplied contact information will be deleted following the close of the response period and related follow-up.