Beware of Your AI Queries; They May Not Be Protected
When you are a party in a legal matter, sometimes the things you type into an LLM are protected and sometimes they are not. Sometimes, if you share privileged things with your AI companion, those things are no longer privileged. It depends on what you shared, who you are, and why you shared it. In this article, the author discusses three recent cases that address what is discoverable.
Three recent cases have addressed the use of large language models (LLMs, colloquially referred to as "AI") in litigation: Warner v. Gilbarco, United States v. Heppner, and Morgan v. V2X.[1] The first and last are civil cases with pro se parties; Heppner is a criminal case with a represented defendant.
The short of these cases is that when you are a party to a case, sometimes the things you type into an LLM are protected and sometimes they are not, and sharing privileged things with your AI companion can destroy the privilege. It depends on what you shared, who you are, and why you shared it. For instance, in Gilbarco, a pro se party used AI and the other side moved to compel everything about that use (inputs, conversations, outputs, etc.). The court applied Rule 26 using the plain meaning of its words: "prepared in anticipation of litigation or for trial by or for another party." This case is a straightforward application. The pro se party did not have to turn over a thing because of how the tool was used and because you do not need a lawyer to create work product.[2] Oh, and since AI is not a "person," an LLM does not a third party make.
Yet, according to the court, that is not a determination based on any measure or standard of human-ness; it is a pronouncement for the purpose of determining with whom, or what, a thing was shared, and the pro-se-ness matters. Had the pro se party engaged counsel and disclosed information, the ruling might have been different as it relates to attorney-client privilege. To waive work product protection, however, the pro se party would have had to first piss off their LLM and cross the higher threshold of likely disclosure to an adversary. The request obviously sought the pro se party's mental impressions, and part of that determination rests on the lack of counsel, because a pro se party is both party and advocate. Mental impressions are not a reasonable thing to seek, although the requesting party did try, relying upon a random content-marketing article as their basis for doing so. Gross.
In Heppner, the opposite result followed from the same principles: discern how the tool was used and whether a privileged relationship or work product protection applied. In Heppner, the defendant threw his likelihood of confidentiality out the window, probably by using a free version of an LLM whose terms of service state a longer version of "what you type in here will be sent to the cops," and, critically, he was represented by counsel (the opposite of pro se). In doing so, he transferred information to a non-confidential system and crossed the threshold of likely disclosure to an adversary. In that case, the party did have to disclose his use of LLM tools (inputs, conversations, outputs, etc.).
The decision in Archie Morgan v. V2X, Inc. is the most interesting because it offers a detailed treatment of the use of LLMs under Federal Rule of Civil Procedure 26; this one is less about whether the inputs, conversations, and outputs are discoverable and more about whether use of the tool must be disclosed. The dispute is between a corporate defendant and yet another pro se plaintiff, but the court's reasoning may extend to expert witnesses employing AI-assisted tools for analysis, modeling, or report preparation. Morgan again shows that applying principles is the court's function; generating hard and fast rules is not. In it, the same principle was applied to assess how a tool was used. Rather than immediately labeling tool use as discoverable or not, one party asserted a concern over the mishandling of confidential materials. As a result, the court revised protective orders to restrict the use of certain tools.
Protective orders are not simply redundant confidentiality labels.[3] In this case, the protective order served as a gateway to restricting the use of records as input into LLMs with certain characteristics (use for training, sharing, etc.). Although Rule 26 applies to materials prepared "by or for a party" in anticipation of litigation, Morgan appears to draw a distinction: long-standing work product doctrine protects the substance of those materials, but not necessarily the tools used to assess or review them.[4] A litigation practitioner cannot assume that the tools they use are protected in the same manner as the materials input into those tools. This is because disclosing one tool or another is not likely to inherently reveal strategy, and an indication of misuse may open the door to discoverability. Do not treat the use of LLMs as a collaborative effort: the choice of tools may be discoverable, though not in unlimited scope.
This position is consistent with likening various LLMs to other tools such as word processing software or pipettes. For instance, identifying Professional Write or the brand of pipette reveals nothing about the quality of work, the methods applied, the inculpatory finding, the exculpatory finding, or anything else. Guarding the tools you use as though they are a closely held secret is dumb unless you cannot explain how they work or how an output was derived from your inputs (in which case, it is not the guarding of the tool that is the dumb thing). If that is why the tools are kept secret, you have already lost. The statement "I used a super secret/fancy/expensive/complex/cheap/simple tool" is simply not revealing and unlikely to be protected. You must disclose enough information to assess exposure of confidential information. The lesson appears to be: if you want to know what tools the other side used, claim that your confidential disclosures were input into such a tool by the opposition, and therefore you simply must know which tools were used.[5]
Inputting things into an LLM may breach a protective order, depending on its language. If you are crafting protective orders, do not leave this open to interpretation: include language that directly addresses the use of LLMs and other such data-grabby software so there is no question as to intent. Absent a specifically worded protective order, uploading your entire case file into BelligerentWhaleAI.v42 does not automatically waive protection (just as e-mailing it might not), but a protective order is an independent constraint, and not following it can constitute mishandling. Frequently, parties agree to discovery levels and outline which types of information shall be disclosed as well as how they should be handled. Protective orders are independent of Rule 26, which means you can retain work product protection and still violate a protective order.
In summary, Warner tells us that use matters and that AI-assisted reasoning, drafting, and internal analysis are protected opinion work product, but that litigant was pro se. Heppner tells us that a represented litigant using a tool with a clearly worded waiver of confidentiality, detached from his own counsel, is not afforded similar protection. There is no isolation of variables here. Morgan is unconcerned with work product status in and of itself but is concerned with the use of the tool and whether that use exposes otherwise confidential data. BelligerentWhaleAI.v42 does not have to be put out to digital pasture yet; she still has protectable utility, and her identity may be discoverable.
As was true in my article about data protection a decade ago, review the terms of your software agreements to ensure they comply with your own data protection policies, including protective orders. This includes the use of BelligerentWhaleAI.v42, available for free at belligerentwhale.com.
Also, be aware that just because you pay for something does not mean it is not “consumer-grade”, is not using data for training or input, is not sharing data with third parties, and the like. Many people pay for a non-professional license (sometimes called “Home” or “Family” or “Student”) for their operating systems or office software suites; these licenses may not have the same enterprise data protection policies you presume they have simply because they are not zero cost. Review them.
Be prepared to answer whether LLMs or other software were used, how they were used, which versions of which platforms, which data categories, and under which contractual safeguards.[6] Likewise, prepare to explain how you separate analysis from basic infrastructure and to reiterate that professional judgment drove opinions, not the aptly named BelligerentWhaleAI.v42.
[1] Sohyon Warner v. Gilbarco, Inc. & Vontier Corporation, Case No. 2:24-CV-12333, February 10, 2026, in the Eastern District of Michigan, Southern Division; United States v. Anthony Heppner, Case No.
[2] There is a difference between attorney-client privilege and work product doctrine.
[3] The redundancy is likely contained in your engagement letter with the client; if it is not, perhaps reconsider your engagement letter provisions so that confidentiality and segmentation are addressed.
[4] In this sense, "materials" includes mental impressions, opinions, and legal or strategic thinking, whether or not representative counsel has been retained.
[5] This is definitely not the lesson.
[6] Unless the answer to the first question is a resounding no and you delivered a beautifully handwritten report complete with calligraphy flourishes.
Dorothy Haraminac, MBA, MAFF, CFE, PI, provides financial forensics, digital forensics, and blockchain forensics under YBR Consulting Services, LLC, and teaches software engineering and digital forensics at Houston Christian University. Ms. Haraminac is one of the first court-qualified testifying experts on cryptocurrency tracing in the United States and provides pro bono assistance to victims of cryptocurrency investment scams to gather and summarize evidence needed to report to law enforcement, regulators, and other parties. If you or someone you know has been victimized in an investment scam, report it to local, state, and federal law enforcement as well as federal agencies such as the FTC, the FCC, and the IRS.
Ms. Haraminac can be contacted at (346) 400-6554 or by e-mail to admin@ybr.solutions.