Inversion
An important question in developing LLM-compatible software in legal (and elsewhere) is whether the control model is "direct" or "inverted." In short: do I ask my document management system (DMS) to summarize, and it asks ChatGPT (direct), or do I ask ChatGPT, and it pulls the document from the DMS (inverted)?
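To make the distinction concrete, here is a minimal sketch. Everything in it is hypothetical: a stub DMS and a stub LLM call standing in for real systems. The point is who owns the loop, not any particular vendor's API.

```python
# Hypothetical sketch of the two control models. The stub DMS and stub
# LLM below stand in for real systems; no vendor API is implied.

DMS = {"doc-1": "Deposition transcript of J. Smith, 2024-03-14 ..."}

def llm_complete(prompt: str) -> str:
    """Stand-in for a call to a hosted LLM."""
    return f"[summary of: {prompt[:40]}...]"

# Direct control: the LegalTech product pulls the document, builds the
# prompt, and calls the LLM. The product owns the loop.
def summarize_direct(doc_id: str) -> str:
    document = DMS[doc_id]
    return llm_complete(f"Summarize this document:\n\n{document}")

# Inverted control: the product merely exposes a retrieval tool. The user
# talks to the LLM, and the LLM decides when to call the tool.
def fetch_document(doc_id: str) -> str:
    """Tool the DMS exposes to the model. The vendor's job here is access
    control, not orchestration."""
    return DMS[doc_id]

def llm_agent(prompt: str, tools: dict) -> str:
    """Stand-in for a model-driven agent. A real model would decide from
    the prompt which document to fetch; hard-coded here for brevity."""
    document = tools["fetch_document"]("doc-1")  # model-initiated call
    return llm_complete(f"Summarize this document:\n\n{document}")

def summarize_inverted(doc_id: str) -> str:
    return llm_agent(f"Summarize document {doc_id}.",
                     {"fetch_document": fetch_document})
```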
From what I've seen, most LegalTech products are doing the former: direct control. I think there are a few reasons why, but I'm not sure they always hold up.
Prompt Engineering Is Hard: Under direct control, the LegalTech product takes control of the prompting, including not just a single prompt but the ability to construct chains of prompt-response-prompt-response (so-called "agentic" workflows). In the early days of LLMs that made sense: there was an art to prompt engineering, and agentic workflows required additional programming. Neither is true today, or both are fast vanishing. Users have gotten more experienced at prompting, for one thing. More importantly, modern LLMs are very good at understanding any prompt you give them, and they can now use the data they can access to construct agentic workflows that go back and forth with that data on their own. More than that, I'd argue that in many situations, especially given the trillions of dollars being spent on these models, AI companies are going to be BETTER at these things than a LegalTech company.
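For a sense of what "agentic" means mechanically, here is the loop that vendors once had to hand-build and that model providers now run natively via tool calling. All names here are illustrative, not a real API.

```python
# Illustrative only: the prompt-response-prompt-response loop that used
# to require bespoke programming and is now built into frontier LLM APIs.

def agent_loop(llm_step, tools, user_prompt, max_turns=8):
    """llm_step(messages) returns {"final": text} when the model is done,
    or {"tool": name, "args": {...}} when it wants more data."""
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_turns):
        step = llm_step(messages)
        if "final" in step:
            return step["final"]
        # The model asked for a tool; run it and feed the result back.
        result = tools[step["tool"]](**step["args"])
        messages.append({"role": "tool", "name": step["tool"],
                         "content": result})
    raise RuntimeError("agent did not finish within max_turns")
```

Once the model provider runs this loop itself, the LegalTech layer's differentiation shrinks to the tools it supplies.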
Security: The argument here is "don't give an LLM access to your data." But look, that's true in both scenarios. No one has a competitive "closed loop" LLM; everyone is sending data to first-party LLMs; and everyone, regardless of the control model, has to deal with the security implications of that, which are real. In both cases you are going to be relying on your LegalTech vendor to have constructed the proper security guardrails. In the direct control scenario, they have to protect against prompt injection and similar AI attacks. In the inverted control scenario, they have to ensure proper authorization and scope. And here too, you could make a good argument that AI companies are BETTER at dealing with AI-specific security concerns, whereas LegalTech companies are better at protecting access to data.
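To illustrate that split: in the inverted model, the vendor's security work concentrates at the tool boundary. A hedged sketch with hypothetical names, enforcing authorization server-side so the check holds no matter what the model, or a prompt injection riding inside a document, asks for:

```python
# Hypothetical sketch: authorization enforced at the tool boundary,
# server-side, independent of anything in the model's prompt.

ACL = {"doc-1": {"alice"}, "doc-2": {"alice", "bob"}}   # who may read what
DOCS = {"doc-1": "...privileged memo...", "doc-2": "...filed brief..."}

def fetch_document(doc_id: str, *, requesting_user: str) -> str:
    """Tool exposed to the LLM. Scope comes from the authenticated user,
    never from text the model generates."""
    if requesting_user not in ACL.get(doc_id, set()):
        raise PermissionError(f"{requesting_user} may not read {doc_id}")
    return DOCS[doc_id]
```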
It's My Data: I think a lot of companies don't want to allow LLMs to connect to their software, not because of security but because they want to "leverage" the customer information they have to sell an LLM product. I don't want ChatGPT to access the documents I store; I'd rather sell you my AI summarization service. I don't want Claude to have access to my caselaw database; I want to sell you my AI research product. OK, but the question legal CONSUMERS should care about is "is this better?" Is your AI "wrapper" better than what would happen if you let ChatGPT connect, or are you just bundling?
To be clear, I'm not arguing the inverted control model is always right. My point is that it's a valid model, and one I think should (and will) gain more currency in LegalTech.