A Collision Course in Evidentiary Standards for AI-Assisted Financial Forensics
How a New Proposed Rule of Evidence Seeks to Admit AI Analysis and Supplant Experts
On January 30, 2026, Anthropic released legal plugins for its Claude AI that automate contract review, compliance tracking, and legal analysis. Within three days, $285 billion in market value evaporated from legal software and publishing companies. This was not a correction. It was a signal. The AI companies are no longer content selling infrastructure; now they are coming for the legal, financial, and forensic analysis applications themselves. The author discusses the legal and practical repercussions that the proposed new Federal Rule of Evidence would have on financial forensics expert witnesses and litigation support professionals.
On January 30, 2026, Anthropic released legal plugins for its Claude AI that automate contract review, compliance tracking, and legal analysis. Within three days, $285 billion in market value evaporated from legal software and publishing companies. Thomson Reuters lost $8.2 billion in a single session. LexisNexis parent RELX shed $11 billion.
This was not a correction. It was a signal. The artificial intelligence (AI) companies are no longer content selling infrastructure; now they are coming for the legal, financial, and forensic analysis applications themselves. The market reaction, however, missed a harder question. It reacted to whether AI can perform legal analysis. Anthropic is saying, “Heck, yeah, we can. Just look at our Legal plugin, duh.”[1] But the real question, and a more difficult one, is whether AI-assisted analysis can survive a courtroom. And right now, the answer is a resounding no.
There has been a quiet evidentiary crisis building in this country, evidenced by shifting interpretations of and updates to the federal rules. Rule 702 was amended in December 2023 to put the burden of identifying and excluding poor expert work squarely on the bench rather than leaving it to the weight of the evidence (the jury is still out on the effect). More recently, the Committee on Rules of Practice and Procedure released proposed Rule 707 in June 2025, the first formal attempt to address AI-generated evidence in federal courts. The comment period closes February 16, 2026.
Part C on page 102 contains a section titled “Proposed New Rule 707 to Regulate Machine-Generated Evidence for Release for Public Comment” and details how to regulate AI-generated evidence for reliability and authenticity. The proposed rule refers to Rule 702 if that evidence is propounded by a witness. The Committee discusses how an amendment to Rule 702 is not workable because it was recently amended and is too general for such a specific subdivision. The Committee has proposed essentially what NACVA’s AI guidance has been saying all along: you own your expert opinion, whether it is a regurgitation of AI or your own analysis. But the Committee also offers an exception to long-standing tests of reliability when AI evidence is not presented by an expert witness; what an exciting frontier! Rule 707 is not about how an expert should or should not use the output of AI tools; that is already covered by Rule 702. According to the Committee, Rule 707 seeks an exception to allow admission of AI-generated evidence such as “analyzing stock trading patterns to establish causation, analysis of two works to determine similarity in copyright litigation, assessing the complexity of software to determine if code was misappropriated”, offered by (take note) a layman.[2] In the same way a horse-drawn carriage manufacturer does not see the appeal of motorized carts, practitioners reading this likely let out an audible gasp when they realized courts are not on their side; not all evidence needs expert testimony to be admitted. Rule 707 proposes that the same tests applied to expert testimony can be applied to AI output, subverting the expert altogether.[3]
I would argue, despite being a horse-drawn carriage manufacturer myself now, that such a proposal might be a good thing. Consider recent litigation tests of other software outputs. Blockchain analytics tools that law enforcement has relied on for years are facing unprecedented challenges (finally). In the Bitcoin Fog prosecution, defense counsel attacked Chainalysis Reactor under Daubert, arguing the software had never been peer-reviewed and had no known error rate. The judge admitted the evidence but acknowledged the peer review gap. The conviction is now on appeal, with amicus briefs calling the forensic techniques “fundamentally unscientific”. In another example, the director of investigations and intelligence at a competing blockchain analytics firm, CipherTrace, attacked Chainalysis’ attributions as unverifiable and massively incomplete, but Mastercard (owner of CipherTrace at the time) then withdrew the director’s report, stating that data CipherTrace had relied on, and to which she had testified, was unverifiable, unauditable, and acquired through practices that Mastercard could not support. They did not further specify, but I suspect such practices include gathering consumer financial data without notice or consent and then reselling it (as I discussed in an article about blockchain analytics tools in the September 2024 issue of The Value Examiner).
These are not isolated cases. Defense attorneys across the country are building expertise in attacking algorithmic, statistical, and other buzzword-style descriptions of evidence obtained through inexplicable software methods. They understand that clustering algorithms make assumptions. Attribution databases contain errors. I have personally corrected over 30,000 attribution errors in blockchain analytics tools (and counting). Proprietary methodologies cannot be examined. When an expert cannot explain how a conclusion was reached, that conclusion becomes vulnerable to exclusion. This is true whether an expert relied on a “tool”, relied on AI, simply fabricated something, or conducted analysis using poor, inexplicable methods. This is not really new, but adding AI (the marketing buzzword for large language models [LLMs]) to this mix starts making the BS a little shinier and a little less smelly.
The solution is easy when an expert or attorney uses AI tools; the onus is on them to show and explain their work. The harder question is how an LLM’s output gets to the point of admissibility without expert testimony. Proposed Rule 707 seeks to outline the path forward for AI tools and specifies that the output should satisfy the requirements of Rule 702. After all, AI tools are powerful tools that, within a well-designed framework, can make legal assistance substantially more accessible and efficient. When an LLM assists in transaction analysis, pattern recognition, or report generation, the evidentiary picture becomes exponentially more complex. An expert knows how to handle that complexity, but just as Redfin, Zillow, and other such sites have reduced engagements for property appraisals, the use of AI tools may reduce the need for expert testimony. Rule 707 seeks to provide a means to do just that, but how does a court assess the reliability of conclusions that even the AI’s creators cannot fully explain? How does an expert defend methodology when the underlying system is probabilistic (i.e., it is capable of generating different outputs from identical inputs)? How does opposing counsel cross-examine an algorithm?
These are not theoretical questions. They are immediate, practical questions that will determine whether AI-assisted forensics becomes a legitimate tool of justice or a liability that contaminates prosecutions and civil judgments alike. And the Rule 707 proposal may propel the use of AI tools forward. Every blockchain analytics tool I use or my opponents use now also contains its own little AI-assistance button. Want us to write that report for you? You got it; these are just blockchain transactions after all, basic facts, right? Not when you add AI to the mix.
Before addressing what AI changes in litigation and how proposed Rule 707 is set to address those changes, it is essential to understand what has never changed in the litigation process. The principles governing expert testimony and forensic evidence are not mere suggestions; they are gatekeeping mechanisms that exist to prevent unreliable conclusions from reaching the trier of fact. And since 2023, the gatekeeper should be excluding bad evidence before it gets presented rather than hearing all of it and letting it go to weight. Let it go to waste instead!
For instance, Rule 702, Daubert, Frye, and other similar tests demand methodology, not credentials. You can have so many letters (even tool-related letters [as some blockchain analytics tools will give/sell you] that may help or hurt depending on the letters), but the crux of admissibility was and is the proper application of a reliable methodology to aid the trier of fact in its understanding. The U.S. Supreme Court established that expert testimony must rest on reliable principles and methods, not a character count of expensive acronyms behind your name. A forensic tool that produces impressive visualizations but cannot articulate its error rate may eventually fail. The “eventually” part is where we are right now; some have failed in court and some have prevailed (and when they prevail, they start using marketing words like “accepted in court” and “forensically sound” when the tool itself is neither). Can a layman simply buy the tool and present its output as evidence? Rule 707 says maybe, but it also defers to Rule 702, which generally says credentials and tool citations do not substitute for explainable methodology.
Transparency is non-negotiable in litigation. Cases are dismissed when the sources and methods supporting evidence are withheld. Trade secrets offer no protection in the courtroom. When a forensic tool’s methodology cannot be examined because it is proprietary, that tool invites exclusion. An expert who cannot explain how conclusions were reached cannot defend those conclusions under cross-examination (tool or no tool). I also note that a tool charging users hundreds of dollars an hour to produce evidence supporting the attribution it already sold to you is a cash grab, at best. And if that evidence is “another user told us so”, get your pearls ready for clutching.
Completeness of records, whether provided to you or obtained by you, must be established and supported by detailed methods, not simply assumed. Financial forensics has always required practitioners to demonstrate that the records reviewed are complete and sufficient for the conclusions drawn. Blockchain’s pseudonymous nature makes this requirement more demanding, not less. The expert must define for the court what “complete and sufficient” means for blockchain data and acknowledge limitations.
Maintaining a chain of custody is not just for fibers and hair; it also applies to digital evidence. Every transfer and transformation of data must be documented. Original data must be preserved in a manner that is provably sound (the methods for gathering and preservation are what make something forensically sound, not the tool used or the file type itself). The path from a raw blockchain transaction to your courtroom exhibit must be traceable, reproducible, and defensible. Once you push that AI-assistance button, you risk losing that reproducible bit (so use it correctly). Once you apply an inventory accounting method without corroborating or supporting evidence for the application of that method (looking at you, LIFO for crypto wallets), you may lose that defensible bit. Of course, bad evidence wins some of the time. It wins when there is not an adequate defense, and sometimes it wins when there is. Rule 707 seeks to apply the evidentiary tests for expert testimony to AI outputs and allows a court to accept AI assistance in lieu of expertise.[4]
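For the technically inclined, here is a minimal sketch (in Python, not tied to any particular tool or to my own workflow) of what documenting that path can look like: hash every artifact at every hand-off so that any later transformation is detectable. File names and actors are illustrative.

```python
# A minimal chain-of-custody sketch: hash each artifact at every hand-off.
import hashlib, json
from datetime import datetime, timezone

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def custody_entry(path: str, action: str, actor: str) -> dict:
    # Each entry ties the artifact's hash to who touched it, when, and why,
    # so the path from raw transaction data to courtroom exhibit stays traceable.
    return {
        "file": path,
        "sha256": sha256_of(path),
        "action": action,            # e.g., "collected", "converted to CSV"
        "actor": actor,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# "raw_txs.json" and "custody_log.jsonl" are hypothetical file names for illustration.
with open("custody_log.jsonl", "a") as log:
    log.write(json.dumps(custody_entry("raw_txs.json", "collected from node", "D. Examiner")) + "\n")
```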
None of these evidentiary requirements emerged from an overbearing, technology-hostile approach. They emerged from careful consideration of technology through decades of forensic disciplines failing courtroom scrutiny because practitioners prioritized conclusions over methodology. DNA evidence, fingerprint analysis, bite mark comparison, hair microscopy, and digital forensic images of devices all faced exclusion when courts demanded proof of reliability rather than blind acceptance of proprietary authority or an expert’s shrug.
What Are the Expected Changes in Litigation as a Result of AI Tools?
AI-assisted financial forensics will absolutely face the same exclusionary tests, when presented by an expert. Rule 707 clearly seeks to apply the very same tests to evidence when presented by a layman (or generated by the attorney to support their pleadings). It states in full the following:
“When machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified to by a witness, the court may admit the evidence only if it satisfies the requirements of Rule 702(a-d). This rule does not apply to the output of scientific instruments.”
Will the field of AI-assisted analysis be prepared for those tests, or will it fail? If you put an inexplicable bundle of “because he/she/it said so” (formerly referred to as a black box) on the stand, it will fail, be it an expert or an AI output. AI and LLMs introduce challenges that no previous forensic tool or methodology has presented. The Committee on Rules of Practice and Procedure recognizes this fact, and that is probably why they want to make a whole new rule.
It comes down to an explainability gap. For instance, a calculator performs arithmetic that any competent examiner can verify, and a database query returns results that can be independently confirmed by repeating the query. However, when an LLM identifies patterns, generates summaries, or assists in analysis, the reasoning process is opaque (i.e., “because I said so”; this answer does not fly with teenagers and it is not likely to fly in litigation). Neural networks do not follow articulable rules and are, by definition, a black box. In fact, if it were not a black box, it would not be AI/neural network/LLM/your bestie. These tools identify statistical patterns in training data that may or may not generalize to the specific facts at issue. This means you need to understand how the tool works. The tool was designed using massive (unimaginably huge) amounts of data and it uses that data to derive patterns. That data is generalized, and the original data may reflect a completely different fact pattern from your case.
Think of it in this oversimplification. You are in front of the judge at sentencing for a traffic ticket. The guy before you was sentenced to seven years for something else. If the AI judge was trained on just the previous sentence, your sentence is statistically likely to be seven years! It may not matter to the AI judge that the fact pattern of your case differs from the previous case. That is what “statistical pattern” means and the output of any tool is completely, utterly, irreversibly dependent on the training data. Do you know what your AI learned today? No! Do you know how it will use what it learned in your matter? Also, no! You do not know what the training data sets are unless you built it yourself, and if you started with someone else’s LLM, you still do not know it all. You will never know how it uses that training data (by definition, if you can know, then you are not using an LLM). When I am asked how, I can say things like, “based on my knowledge, skills, and experience as an expert”. AI-Dorothy cannot say that (yet).
Suppose you have controlled for this in your AI tool and you fed it only the most delicious cryptocurrency fraud patterns perpetrated on the Bitcoin blockchain. You would not then ask it to analyze a romance scam perpetrated using Ethereum and Tron; that is absurd. That is also what happens when you press the AI button without understanding its underlying knowledge or capabilities. The path forward for AI tools is not complete avoidance, but it is not a well-lit or paved route either. How can Anthropic better assure me that outputs from its Legal plugin are reliable? (I have answers for this, but I also have a consulting fee.) How can I, in turn, assure the trier of fact in my testimony? How about we take me to the train station instead and write me, the expert, out of the storyline altogether?[5] (Again, I think this can still be a good thing.) How can Anthropic (or any other AI tool) assure its user, who is not an expert and is presenting AI output to a trier of fact, of that output’s reliability? How can AI tools empower their users to enable an accessible justice system? Proposed Rule 707 says to do what experts do, but AI companies do not have on their payroll an advisor experienced in civil litigation matters that rely heavily on emerging technology outputs.[6] It is starkly clear from their current offerings that they have not considered exactly how to enable any user, expert or otherwise, to defend outputs.
Under Rule 702, Daubert, Frye, and other expert test precedents, experts must explain the principles and methods underlying their testimony. What are those principles when AI contributes to or performs the analysis? “The model identified this pattern” is an assertion equivalent to “because I said so”. I can see the eyes of the trier of fact narrowing and their head turning slightly to the side in a preponderance of exclusion.
The methods applied must also be reproducible. That is the point of the scientific method from primary school: guess, test, repeat (or hypothesize, test, repeat if you want to get pretentious about it). Reproducibility is inherent in something that is scientifically reliable; that is why it is called science. The same methodology applied to the same data should yield the same results. LLMs are probabilistic (i.e., they rely on probability). Identical prompts generate different outputs, model versions change frequently, and training data evolves continuously. This is true for free versions, for paid versions, and for offline DIY versions; in fact, it is true of LLMs by definition. This means that admissibility rules will change to address and accept the growing capabilities of AI tools.[7]
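A toy illustration of what “probabilistic” means in practice follows. It is not any vendor’s model, just weighted sampling over a made-up next-token distribution: identical inputs, different outputs, unless the tool deliberately decodes deterministically.

```python
# Toy sketch: sampling the next token from a probability distribution means
# identical prompts can yield different outputs; greedy (temperature-zero-style)
# decoding is the deterministic exception.
import random

next_token_probs = {"plausible": 0.5, "fabricated": 0.3, "verified": 0.2}

def sample_token(probs: dict, greedy: bool = False) -> str:
    if greedy:
        # Deterministic: always pick the most probable token.
        return max(probs, key=probs.get)
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print([sample_token(next_token_probs) for _ in range(5)])               # varies run to run
print([sample_token(next_token_probs, greedy=True) for _ in range(5)])  # identical every run
```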
An AI-assisted analysis performed in January may not be reproducible in September when the case goes to trial. How should AI tools evolve and how should courts evolve to make this basic fact of how AI works acceptable anyway? Notice that the onus is shifting away from being just on the user (expert or not); now, the courts are considering a change and the tools, I expect, will change to meet those requirements (if the proposal becomes a rule).
If the analysis cannot be reproduced, how can it be verified? If it cannot be verified, how can it be defended? AI systems learn from data, and that data carries assumptions, biases, and gaps. A model trained primarily on Bitcoin transactions may perform poorly on an Ethereum transaction set. A system optimized for detecting ransomware payments may miss romance fraud patterns entirely. The forensic expert must understand what the AI learned from, whether that training applies to the case at hand, and what blind spots might exist. Most current tools do not disclose this information (for good, secret sauce reasons and all that). Most current practitioners do not know what to ask. If they do know how to ask and have miraculously overcome the chatbot labyrinths, they still may not get a useful response or, worse, they get a response that clearly illustrates the inadmissibility of the output, which is a waste of time and money for everyone.
LLMs also drop acid. They hallucinate. They generate plausible-sounding content that may be entirely fabricated. In May 2025, Anthropic’s own legal team discovered that Claude had hallucinated fake author names and titles when generating legal citations. This is something the legal and expert professions know very well (it has been the main water cooler topic for more than a year now). NACVA has been tracking and educating practitioners about how to verify and avoid fabricated content through its AI Data University (this should have been obvious, but it speaks to the improper reliability presumptions that people apply to technology).[8] The recent missteps serve as a reminder that you must understand the capabilities of your tool. Do not forget that AI can fabricate citations, transaction patterns, entity relationships, and analytical conclusions. So too can humans. If your drug-addled brother-in-law wanders into your office, picks up a case folder, then pronounces that he has determined, from statistical pattern analysis, that it was Professor Plum in the Conservatory with the Candlestick, you are not likely to simply regurgitate that pronouncement as your own expert opinion (depending on your work ethic, I suppose).[9] But you do not have to fire your brother-in-law, and you, similarly, do not have to outright reject AI tools. There is a path forward for him: maybe get him clean first, get him some training and experience, let him practice a bit.[10] Blind acceptance and regurgitation may work for baby birds; it rarely works for humans and it is not likely to work for AI tools either.
The forensic practitioner who relies on AI outputs without independent verification is building a case on a foundation that may not exist due to mismatched fact patterns or due to digital LSD. Verification is not new to the forensic practitioner. It is new to a layman. Courtroom defensibility and admissibility are a new frontier for AI tools.
So, what is required of courtroom-defensible AI analysis? The gap between current practice and courtroom requirements is vast. Closing it demands a framework that preserves AI’s analytical power while meeting the evidentiary standards courts require. You can shape those requirements with your comments on the proposed Rule 707; do not go whining about it later if you choose not to provide input. The AI system must function as a tool that assists analysis; it must not be used as an oracle that generates conclusions.[11] As before, the expert, not the algorithm, bears responsibility for opinions offered to the court and Rule 707 may provide some additional mechanisms to deem an AI tool reliable.
I will waive my consulting fee just this once and offer the following considerations for practitioners, subject to NACVA’s advisory brief on the same:
- Independent verification of AI-generated findings through traditional methods (you are the expert, after all)
- Understanding of the AI’s methodology sufficiently to explain it under cross-examination (this is between you and opposing counsel, who must also understand it well enough to push back on your BS answers)
- Documentation of instances where AI outputs were rejected or modified based on professional judgment (do not be scared, be prepared)
- Clear records distinguishing AI-assisted analysis from independent expert conclusions (this one may be an ethical consideration more than anything else)
- An expert who testifies “the AI determined this” has testified to nothing and risks exclusion based on a failure to apply expertise (the judge uses AI too and if he can do it himself, what does he need you for?)
Consider replacing “because I/my AI tool/my bestie/my brother-in-law said so” with the following: “I determined this, using AI to assist my analysis in the following documented ways, subject to the following validation steps.” That just may be defensible testimony, but you need cooperation from AI tools to prepare that documentation and validation.
Writing the Expert Out of the Storyline[12]
OK, I will waive my consulting fee just one more time and offer these suggestions for Anthropic Legal and whichever other AI company is interested. Here is how you get the horse-drawn carriage (me, the expert) off the road and make way for your self-propelled motorized vehicle (AI output accepted in court as though it is me). Proposed Rule 707 is a path for admitting AI analysis without an expert. Consider providing the following to overcome the defensibility and admissibility hurdles (a minimal sketch of such a record follows the list):
- The specific AI system used, including version, configuration, and access date
- The exact prompts or queries provided
- The raw outputs received, preserved in original form
- Any post-processing, interpretation, or modification applied
- The relationship between AI outputs and final conclusions (this one is more of a user input than an AI output, but it is also something users will need)
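What might that look like in practice? A minimal sketch follows, with hypothetical field names and a hypothetical plugin name (nothing here reflects any vendor’s actual API); it is one way a tool could emit a machine-readable record alongside every output so a user has the items above in hand before anyone asks.

```python
# Hypothetical sketch of a per-output audit record an AI tool could emit.
import hashlib, json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIAnalysisRecord:
    system_name: str             # e.g., "ExampleLegalPlugin" (hypothetical)
    model_version: str           # exact version string, not "latest"
    configuration: dict          # temperature, seed, plugin settings, etc.
    prompt: str                  # the exact prompt or query provided
    raw_output: str              # preserved verbatim, before any editing
    post_processing: list = field(default_factory=list)   # human edits, interpretations
    relation_to_conclusion: str = ""                       # how the output fed the final opinion
    accessed_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def output_digest(self) -> str:
        # Hash of the raw output so later silent edits are detectable.
        return hashlib.sha256(self.raw_output.encode("utf-8")).hexdigest()

record = AIAnalysisRecord(
    system_name="ExampleLegalPlugin",
    model_version="2026-01-30-rc1",
    configuration={"temperature": 0, "seed": 42},
    prompt="Summarize the flow of funds in exhibit_12.csv",
    raw_output="...model output preserved verbatim...",
    post_processing=["removed speculative paragraph after manual review"],
    relation_to_conclusion="Starting point only; conclusions verified independently.",
)
print(json.dumps({**asdict(record), "sha256": record.output_digest()}, indent=2))
```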
A brief note to AI companies: I understand this may sound intimidating for small, scrappy startups such as yourselves (looking at you, OpenAI, Microsoft, Google, Anthropic, DeepSeek, and others). It may seem as if I am asking for your secret sauce ingredients. I am not. And even if you jotted them down, we do not use the same mixers.[13] What I am suggesting are the basics, just the little things, that a user needs to avoid outright rejection or surreptitious use.[14] You do not want to be the back-alley AI, do you? If you are offering a “legal” plug-in, consider adding this lagniappe to make it genuinely legal, rather than barely legal.
This documentation enables opposing parties, opposing experts, opposing counsel, and the trier of fact to examine methodology. It allows courts to assess reliability and protects against accusations of result-driven (aka biased) analysis. It also enables reproduction or reveals when reproduction is impossible, which is itself important information. Recognize that AI tools, like all tools, have limitations and appropriate uses (you would not use a lawn mower to clean your pool). Forensic experts have always been required to acknowledge the boundaries of their opinions. The use of a new tool, such as AI, demands heightened attention to this obligation (just as your brother-in-law would).
Expert reports and testimony (and, if Proposed Rule 707 becomes Rule 707, AI tool outputs) should contain explicit statements addressing the following:[15]
- What questions the AI can answer versus what requires human judgment (a bifurcation of scope between the AI and the expert or a clear, limited scope of the AI tool)
- Known failure modes, biases, or blind spots in the AI tool (does it know what it does not know)
- The extent to which conclusions depend on AI outputs versus independent analysis (pull that sycophantic lever all the way down to zero)
- Assumptions embedded in the AI’s training or design that affect the specific case (the keywords that separate this from secret sauce are “specific to the case”)
- Confidence levels and uncertainty ranges where applicable (but then also explain those things in a way comprehensible to the trier of fact)
Acknowledging limitations is not weakness; it is the hallmark of reliable testimony. Courts distrust experts who claim certainty where uncertainty exists, and they respect experts who define precisely what they know and what they do not. It is OK to say, “I don’t know.”[16] AI findings must be validated through traditional forensic methods wherever feasible and applicable. For instance, if an AI tool identifies a cluster of addresses as belonging to a single entity, can that clustering be confirmed through co-spend analysis, timing patterns, behavioral indicators, or some other corroboration or support? If an AI tool flags a transaction pattern as indicative of layering, does manual review support that characterization? These efforts toward validation provide independent confirmation of AI outputs and demonstrate professional judgment rather than algorithmic dependence.
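For readers who want to see what one of those traditional corroboration steps looks like, here is a minimal sketch of the common-input-ownership (co-spend) heuristic. The transactions are invented, and the heuristic has well-known limitations (CoinJoin transactions and exchange hot wallets break it), which is exactly the kind of caveat an expert must disclose when comparing it against an AI tool’s claimed clusters.

```python
# Minimal co-spend sketch: addresses appearing together as inputs to the same
# transaction are grouped (union-find), then compared against the AI's cluster.
from collections import defaultdict

def cospend_clusters(transactions: list[dict]) -> list[set]:
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for tx in transactions:
        inputs = tx["inputs"]
        for addr in inputs:
            find(addr)                 # register every input address
        for addr in inputs[1:]:
            union(inputs[0], addr)     # all inputs to one tx presumed same controller

    groups = defaultdict(set)
    for addr in parent:
        groups[find(addr)].add(addr)
    return list(groups.values())

txs = [
    {"txid": "t1", "inputs": ["addrA", "addrB"]},
    {"txid": "t2", "inputs": ["addrB", "addrC"]},
    {"txid": "t3", "inputs": ["addrD"]},
]
print(cospend_clusters(txs))  # clusters: {addrA, addrB, addrC} and {addrD}
```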
As always, keep it simple. Courtrooms understand financial statements. They understand chronologies. They understand flow-of-funds analyses. They do not understand algorithm outputs, confidence scores, or clustering visualizations without translation, explanation, or support. AI analysis must produce outputs in formats courts can evaluate (see the above considerations). Creating documents that trace asset movement from source to destination, establishing disposition and value at relevant dates, and presenting findings in language accessible to judges and juries is my secret sauce. The sophistication of the underlying analysis (however amazing or however bad) is irrelevant if the presentation is incomprehensible.
The companies building AI tools for legal and financial applications face a difficult decision. They can continue developing for capability with a focus on faster processing, broader coverage, and more impressive demonstrations. Or they can develop for defensibility with a focus on transparent methodology, measurable accuracy, and reproducible outputs. These are not mutually exclusive, but there are trade-offs.[17] The market rewards capability but the courtroom demands defensibility, and if you fail publicly enough times, the market will recognize those courtroom failures and react. Transparency must become a design principle. Tools that cannot explain their reasoning will face increasing exclusion. The black-box approach acceptable for consumer applications is grossly inadequate for forensic use. Explainability cannot be an afterthought, a bolt-on solution to something that needs its own thing; instead, it must be an architectural consideration.
Enabling Forensic Adequacy of AI Tools
To enable forensic adequacy, AI tools should consider measuring and disclosing error rates for various methods so a user can more easily illustrate the reliability of applied methods; this would require a disclosure of the method. Consider including accuracy metrics such as false positive and false negative rates across different use cases as a regular, documented measure available to users.
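As a sketch of what such a disclosed metric could look like, the following computes false positive and false negative rates from a labeled validation set; the labels and predictions are invented for illustration, not drawn from any real tool.

```python
# Illustrative accuracy-metric sketch: false positive and false negative rates
# from a labeled validation set.
def error_rates(labels: list[bool], predictions: list[bool]) -> dict:
    fp = sum(1 for y, p in zip(labels, predictions) if not y and p)
    fn = sum(1 for y, p in zip(labels, predictions) if y and not p)
    negatives = sum(1 for y in labels if not y)
    positives = sum(1 for y in labels if y)
    return {
        "false_positive_rate": fp / negatives if negatives else 0.0,
        "false_negative_rate": fn / positives if positives else 0.0,
    }

# Ground truth: which addresses truly belong to the flagged entity (illustrative).
labels      = [True, True, False, False, True, False]
predictions = [True, False, True, False, True, False]
print(error_rates(labels, predictions))
# {'false_positive_rate': 0.333..., 'false_negative_rate': 0.333...}
```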
Consider maintaining version stability.[18] When an AI tool contributes to or performs forensic analysis, that analysis must be reproducible months or years later when litigation concludes. Developers may maintain version archives and provide practitioners with the ability to reproduce historical analyses exactly as before, or in some way document the changes over time so that version instability can still be explained and defended.
I suggested it before, but I will reiterate: consider documenting training data provenance and providing training data information to users. Users do not need your complete secret sauce, but they do need something that enables them to defend AI outputs. Forensic tools should disclose what data trained their models, what assumptions that data carries, what biases may have been identified, and what limitations result. A practitioner cannot assess reliability and a user cannot defend their use without understanding what the AI tool learned or how that learning might impact the output.
AI companies that build for defensibility will likely dominate the forensic market, not as sales to experts but as sales to everyone else in lieu of hiring an expert. Building for defensibility does not inhibit the strength of an AI tool; it is not cowering to tech-hostile courts. Courts are not hostile to technology; they are hostile to unreliability and a lack of transparency, and they have spent centuries distinguishing technology from unreliability in an effort to accept and admit technology.
Conclusion: A Call to Action
Here is the uncomfortable truth: the people building AI forensic tools generally do not understand courtroom evidentiary requirements. And the people who understand courtroom evidentiary requirements generally do not understand AI capabilities and limitations. This expertise gap is dangerous. For instance, some engineers aim to optimize technical performance without understanding that impressive outputs mean nothing if they cannot survive cross-examination.[19] Attorneys recognize evidentiary problems but many lack the technical sophistication to specify solutions. Forensic practitioners trained in traditional methods struggle to evaluate when AI assistance is appropriate and when it introduces unacceptable risk.
Bridging this expertise gap requires a rare combination: deep technical understanding of how forensics tools and AI tools work, combined with extensive courtroom experience understanding what courts actually require, combined with tested experience explaining complex technical outputs to triers of fact, combined with the professional standing to establish standards others will follow. (Oh, is that all?) This combination does not emerge from academic study alone.[20] It emerges from years of forensics practice, of testifying, of overcoming challenges, of refining methodology, of defending new methodologies, of explaining emerging technology, of understanding what survives scrutiny and what fails, and of understanding not just the rules but how courts are likely to apply them. AI companies need practitioners who have done this work, who have established precedents, who have written the methodologies that others cite, who have trained law enforcement, attorneys, and fellow practitioners, who have faced challenges and prevailed, who understand forensics as courtroom practice with stakes and consequences. Bridging the expertise gap is very similar to bridging the knowledge gap between experienced, retiring employees and new hires when there is no manual, no handbook, and no standard operating procedures.
What Happens if You Do Not Act (even though I just said you should)?
The practitioners I am calling upon herein (you know who you are) must participate in this discourse because they hold knowledge, skills, and experience that cannot be replicated by AI, cannot be acquired through certification, and cannot be purchased through acquisition. Bridging the expertise gap requires careful consideration.
The legal system’s integrity depends on evidence that is reliable, on methodology that is transparent, and on experts who are accountable for their analysis. AI threatens none of these principles, but AI deployed without discipline threatens all of them. Proposed Rule 707 is an attempt to bestow discipline without the need for an expert.
The cases being built today using AI-assisted analysis will face challenges tomorrow. Some will survive. Some will not. The ones that fail will not fail quietly and they are likely to generate precedents that have the potential to constrain the entire field, headlines that undermine public confidence, and reversals that free the guilty while tainting the innocent (or jail the innocent). The window for establishing standards is narrow and closing. Rule 707 comments close on February 16, 2026. Precedents are being set in courtrooms right now. The frameworks established in the next 12 to 24 months will govern AI forensics for a generation. The question is not whether standards will emerge. The question is whether those standards will be established by practitioners who understand both the technology and the courtroom or by courts reacting to repeated failures only after the damage is done.
AI will transform financial forensics. This transformation is not optional, not reversible, and not distant. It is happening now. The tools are powerful. The applications are compelling. The market demand is overwhelming. But capability without reliability is dangerous, and the courtroom remains the ultimate test of reliability. This field needs frameworks that capture AI’s benefits while preserving evidentiary standards. It needs practitioners who can bridge the expertise gap between technologists and attorneys. It needs standards established proactively, and not precedents established reactively.
I say this in blockchain forensics courses frequently and it applies to AI as well: technology changes, the underlying principles do not. The challenge is applying enduring principles to emerging capabilities before the courtroom sets a limiting precedent based on a select few using AI tools poorly (inadvertently or deliberately). The expertise to do this exists. The key question is whether the institutions building AI forensic tools recognize what is required, whether qualified practitioners will provide valuable input despite the risk of writing themselves out of the storyline, and whether either group will act before the court inevitably acts for them.
[1] This is not a direct quote. It is a fabrication imagined in a scenario where Anthropic releases a Legal plugin for their AI tool. Oh wait. They did.
[2] Pg. 102, Excerpt from the May 15, 2025, Report of the Advisory Committee on Evidence Rules.
[3] If you have not done so already, perhaps reach up ever so slightly and clutch your pearls. The rule is not worded in this way and frankly, it is not the court’s responsibility to consider their impact on my income.
[4] This is not how Proposed Rule 707 is phrased, but it may be a consequence.
[5] The train station is a reference to a popular television show, Yellowstone. I do not advocate violence.
[6] At least, that I could find from my brief research, but also, why would they? Rule 707 is a proposed rule, not a rule.
[7] This is not a prediction, this exact proposed change to federal rules is open for public comment right now.
[8] https://aidatauniversity.com/
[9] This is absurd on its face because who would bring a candlestick into the conservatory in the first place?
[10] These statements are in no way related to my actual brothers-in-law.
[11] Besides, everyone knows the oracle lives in an apartment in Australia.
[12] The train station is a reference to a popular television show, Yellowstone. I do not advocate violence.
[13] This is a reference to the restaurant equipment, not to the asset obfuscation techniques and services.
[14] I recognize that a practitioner’s surreptitious use is still revenue-generating and likely of little concern to the seller.
[15] I also defer to NACVA’s Advisory Brief on this issue: https://www.nacva.com/advisorybrief
[16] Obviously, that should not be the entirety of your testimony.
[17] These are most similar to the trade-offs between security and transparency.
[18] Know that I make this suggestion with the full knowledge of a software engineer and a teacher of software engineering. I know what the words mean and I chose to say them anyway.
[19] They have also probably never been cross-examined.
[20] Know that I say these words as a former teacher and professor. I know what the words mean and I chose to say them anyway.
Dorothy Haraminac, MBA, MAFF, CFE, PI, provides financial forensics, digital forensics, and blockchain forensics under YBR Consulting Services, LLC, and teaches software engineering and digital forensics at Houston Christian University. Ms. Haraminac is one of the first court-qualified testifying experts on cryptocurrency tracing in the United States and provides pro bono assistance to victims of cryptocurrency investment scams to gather and summarize evidence needed to report to law enforcement, regulators, and other parties. If you or someone you know has been victimized in an investment scam, report it to local, state, and federal law enforcement as well as federal agencies such as the FTC, the FCC, and the IRS.
Ms. Haraminac can be contacted at (346) 400-6554 or by e-mail to admin@ybr.solutions.

