Artificial Intelligence or Artificial Interference?
How AI is Reshaping Litigation for Better and Worse
AI remains a flawed practice companion. In addition to the possibility of hallucinated case citations and incorrect legal analysis, the use of AI introduces data privacy concerns and risks misadvising individuals due to overly generalized AI conclusions. In this article, the author addresses the following: Does the use of AI in litigation represent true artificial intelligence, or is it artificial interference preventing a just outcome?
Artificial intelligence (AI) is rapidly transforming the modern legal landscape, offering tools for research, drafting, fact development, document review, expert services, jury selection, and even predicting case outcomes. If used correctly and with robust safeguards, some attorneys, experts, and litigants may benefit from the use of AI to streamline litigation projects. However, AI also can pose serious risks and dangerous pitfalls when misused or used without adequate quality control measures in place. Recently, for instance, dozens of attorneys have faced sanctions in state and federal courts for filing briefs containing fake (“hallucinated”) case citations or incorrect statements of the law generated by AI. And the Government Accountability Office (GAO) has admonished protesters for using AI to draft bid protests for similar reasons. Outside the parameters of legal research and drafting, litigants are now using AI for more novel and potentially even more troubling purposes. One such example is a recent Arizona criminal case in which the family of a crime victim offered an AI-generated victim impact statement for use at sentencing. The use of AI to provide a witness statement raises serious concerns about the accuracy of the information provided and the fairness of the proceedings, prompting the question: Does the use of AI in litigation represent true artificial intelligence, or is it artificial interference preventing a just outcome?
Recent Applications
Today, AI is commonly used by lawyers and litigants to streamline otherwise time-consuming legal tasks like legal research, brief writing, and synthesizing or summarizing voluminous document productions. In addition, AI tools are now being integrated into client-facing interfaces such as chatbots for legal intake. Expert witnesses also may use AI to assist with fraud detection, financial analytics, and forecasting. In theory, these tools can help assess the merits of potential cases, streamline client onboarding, and potentially reduce litigation costs over time. However, as noted above, AI remains a flawed practice companion. In addition to the possibility of hallucinated case citations and incorrect legal analysis, the use of AI introduces data privacy concerns and risks misadvising individuals through overly generalized AI conclusions. And, where AI-generated or -enhanced evidence forms all or part of the foundation for a retained expert’s opinions, there is a substantial risk that those opinions will be excluded. Ensuring human oversight in these interactions remains critical to maintaining legal integrity.
Earlier this year, an Arizona criminal case may have shifted the legal-AI landscape dramatically. In 2021, in Chandler, Arizona, Christopher Pelkey was shot and killed by Gabriel Paul Horcasitas during a road rage incident. Horcasitas was eventually convicted of the killing. At sentencing, crime victims (or their families, as may be appropriate) generally are entitled to give victim impact statements, i.e., written or oral statements describing how the crime affected their lives, which are submitted to the judge to consider during sentencing. For Horcasitas’s sentencing in May 2025, Pelkey’s sister prepared and played for the sentencing judge an AI-generated video that depicted her deceased brother speaking to the camera as if he were offering his own words. To create the video, she used AI programs to combine photographs, videos, and audio clips. She altered portions of his image, such as removing his sunglasses and trimming his beard, and she recreated his laugh. The resulting image of her brother recited a script that she wrote. Experts believe the case represented the first instance in which an AI-generated video of a victim was used as a victim impact statement.
The judge expressed his appreciation for the video, then sentenced Horcasitas to 10.5 years in prison for manslaughter. Although the defense attorney does not appear to have objected to the use of the video at the sentencing hearing (possibly dooming any appeal), questions remain as to whether the video was an appropriate victim impact statement and whether it was fair to the defendant. As noted, the AI video was not actually the victim himself; it was an approximation, bearing an altered image and delivering a statement written by someone else. Would the victim have given the statement attributed to him? Would he have been as credible, likable, and admirable as the video made him out to be?
Victim impact statements are not formal evidence, and they are submitted to a judge, not a jury. Therefore, the risk of the ultimate decision-maker giving undue weight to a statement manufactured through AI is somewhat lessened. That said, if AI can be used for victim impact statements—to create or approximate facts, to manipulate emotion, or to drive outcomes—it could open the door to risks of undue influence and unfairness.
Potential Future Applications
If an AI-generated video can be used for a victim impact statement, it is no great leap to expect attorneys will attempt to use AI to assist in similar contexts, if a court allows it. For instance, a litigant could offer an AI-generated video of a witness’s deposition testimony. Under existing rules of evidence, most states allow deposition transcripts of opposing parties to be read into the record without that party testifying live. In some circumstances, third-party witness testimony can be read into the record when that witness is unavailable to testify. But AI-generated video or audio, complete with synthesized voice, tone, and body language, adds a new layer of complexity and risk. Jurors and judges often assess credibility based not just on words, but on a witness’s demeanor and delivery. An AI-generated version might convey emotion or nuance that the real witness never expressed, thereby changing the perceived truthfulness or weight of testimony. This could tip the scales in close cases, threatening the overall fairness of proceedings.
In other cases, litigants may attempt to use AI-enhanced or -generated versions of evidence to present a clearer picture of their side of the facts. In one Seattle-based trial, for instance, a criminal defendant attempted to offer an AI-enhanced version of a smartphone video as evidence, claiming the original video was low resolution and blurry, whereas the AI-enhanced version offered a “more attractive product for a user.” The court ultimately denied admission of the video, finding that AI enhancement is not viewed as sufficiently reliable in the relevant scientific community.
Similarly, an expert witness may use AI-generated or -enhanced evidence to summarize voluminous financial records, to support an opinion by generating calculations or cross-checking calculations against available data, or even to draft an expert report or declaration. In January 2025, a Minnesota federal court struck an expert opinion offered via the declaration of (ironically) an AI expert testifying on the dangers of AI deepfakes, because the declaration itself included AI-hallucinated citations. In a separate matter, a New York judge found an expert’s opinion speculative and not credible after the expert used an AI tool to assist with a damages estimate. Ultimately, the judge concluded that the expert (and the offering party) could not establish that the AI tool was reliable or generally accepted in the relevant field of expertise.
However, that may change over time. Inevitably, AI technology will improve to the point where industry experts generally consider it reliable. When that happens, AI enhancement will be susceptible to the same risks as AI-generated witness testimony. Are the facts accurately depicted in an AI-generated or -enhanced video? Or are they manipulated and colored by a litigant’s self-serving narrative of the facts or an attorney’s “spin”? Therein lies the risk of allowing AI-generated witness testimony or AI-enhanced evidence in litigation. The ability to use AI to manipulate information to enhance a litigant’s storytelling, or to create evidence that does not actually exist or is otherwise unreliable, crosses the line from artificial intelligence to artificial interference with the opponent’s right to a fair trial.
Key Takeaways for Litigants and Expert Witnesses
- Use AI at your own risk. AI remains a very new technology. While AI tools may, in some circumstances, streamline time-consuming research, writing, or discovery projects or allow individuals to organize their thoughts in a coherent way, many AI tools are unreliable and cannot be trusted to provide accurate information, case citations, points of law, or legal or expert analysis. Lawyers using AI to research or draft submissions to clients, courts, arbitrators, or other tribunals must double-check all AI-generated work product to ensure accuracy and compliance with ethical requirements. Unrepresented litigants, whether in state or federal courts, arbitration, or before tribunals such as GAO, should be extremely wary of the use of AI as well. Unrepresented litigants are not immune from monetary sanctions or the dismissal of their cases as a result of the improper use of AI tools. Ultimately, the loss of a case due to the improper use of AI could cost a litigant far more than the attorneys’ fees saved by using AI as a shortcut. Meanwhile, represented litigants should request that their attorneys disclose the use of AI tools in litigation. The improper use of AI can lead to significant penalties, and litigants should know those risks when engaging counsel.
- Experts must be cognizant of the unreliability of AI tools and limit their use. Expert witnesses are retained and offered as witnesses in litigation because their “knowledge, skill, experience, training, or education” can, at least in theory, assist the judge or jury in understanding the evidence to be presented at trial. But expert testimony is only admissible where the expert’s opinion is “based on sufficient facts or data”; “is the product of reliable principles and methods”; and “reflects a reliable application of the principles and methods to the facts of the case” (Fed. R. Evid. 702). In testing the reliability of expert testimony, courts often consider whether the expert’s principles and methods are generally accepted in the scientific or other relevant professional community. An expert’s use of AI raises serious questions as to reliability and admissibility, with a greater scope of AI use substantially increasing the risk that expert testimony will be excluded. For instance, if an expert witness were to use an AI tool simply to proofread the grammar in an expert report, a court is less likely to find that AI use objectionable. However, if the expert based their opinion on AI-generated evidence, it is far more likely a judge would conclude that the testimony was not based on sufficient facts or data. If AI tools are to be used by an expert, that expert must be prepared to explain in detail how the AI tool is generally accepted in that expert’s field and otherwise meets the requirements for expert testimony.
- Be ready for AI “evidence” in litigation. As seen across most industries, the use of AI is increasing, and the scope of its use is expanding rapidly. The legal industry is no different, with AI-driven research and drafting programs, discovery synthesis programs, and similar tools beginning to flood the legal market. Even if litigants do not use AI tools themselves, they should expect their opponents will. When AI-generated or -enhanced evidence is offered in litigation, the opposing party must be prepared to vet that evidence, including by inquiring about the method and manner of its creation, the person who requested or participated in its creation, whether any other outputs were generated, what changes were made to prompts to reach the final result, and whether the evidence was reviewed and approved by a qualified third-party expert. When engaging in discovery as to an opponent’s expert witnesses, parties should ask whether the expert used AI in any form related to the case, the scope of that use, and whether the use of AI meets expert admissibility requirements. Only where a litigant is fully apprised of the source and content of all aspects of an opposing party’s case, including all AI-generated and -enhanced evidence—and its use in forming any expert’s opinion—can that litigant present their best position.
- Challenge the use of AI evidence. Litigants also must be prepared to timely object or move to exclude AI-generated or -enhanced testimony or evidence, particularly if that testimony or evidence may be presented in front of a judge or jury. Litigants are strongly positioned to argue that AI-generated testimony is not sufficiently probative of the facts as they occurred and that AI-generated evidence is prejudicial and likely to confuse a judge or jury, justifying its exclusion. For AI-enhanced evidence, litigants should move to exclude such evidence as it is presently considered unreliable in the scientific community. Regarding expert testimony, litigants should challenge any expert opinion where AI was used to assist in the formation or drafting of the expert’s opinion, and those topics should be a primary focus during any expert deposition. The failure to object or exclude the evidence in a timely manner could lead to an unjust result and the loss of an issue for appeal.
This article was previously published in PilieroMazza Insights, September 2025, and is republished here by permission.
Matt Feinberg is an accomplished litigator with over 15 years of experience handling federal and state cases, including civil and appellate litigation, along with arbitration proceedings. As an experienced senior practitioner in PilieroMazza’s Litigation and Dispute Resolution Group and Chair of the False Claims Act (FCA) and Audits and Investigations teams, he has a unique perspective on successful litigation strategies to achieve the best possible outcomes for government contractors and commercial businesses. He is particularly adept at identifying weak spots in an opponent’s case, often leading to successful dismissals or early resolution of disputes, ultimately avoiding an expensive trial.
Mr. Feinberg can be contacted at (202) 655-4177 or by e-mail to mfeinberg@pilieromazza.com.
