
Unimpeachable AI Guardrails

Ethical AI Parameters for Valuation Professionals


There is something charming about a good guardrail. Whether you are navigating the hairpin curves of a West Virginia backroad or the shifting terrain of emerging technologies, there is comfort in knowing you will not plummet into the ravine of professional ruin, so long as the guardrail holds.

This metaphor became more than theoretical for me in May 2025, when I flipped my BMW between four and seven times (depending on which witness you ask). I broke my arm and suffered a concussion, but thanks to a properly worn seatbelt, well-engineered safety systems, and yes, a guardrail, I am still here, typing this article with a healing wrist and a renewed appreciation for both literal and professional safeguards. I wasn't speeding. I was following the law. But even when we follow all the right protocols, a deer can still dart into our lane. Professional standards, like guardrails and speed limits, are not intended to eliminate risk; they mitigate the consequences when the unexpected happens.

In the world of valuation, particularly at the intersection of professional judgment and artificial intelligence (AI), this 21st article of the Unimpeachable Neutrality Series raises the question now facing us: what kind of AI guardrails should exist? And how do we keep them from becoming runaway lanes, or worse, launchpads, into unintended consequences?

DISCLAIMER: Now, let me be clear. The thoughts in this article are my own, written from the slightly unbalanced perch of someone who loves professional standards so much that I made the section sign (§) my business logo. More importantly, I do not speak for the Appraisal Standards Board (ASB), the NACVA Standards Board, the AI and Machine Learning Commission (AIMLC), or the GACVA Advisory Council. While I proudly serve on all four, nothing in this article represents the views of those bodies. This is not a leak, a hint, or a trial balloon. This is me, standing in the neutral middle, calling balls and strikes while the rest of the world debates whether robots should be pitchers.

The Crux: AI Disclosure vs. AI Dependence

Let’s start with the idea that seems so intuitive it practically prints itself on the page: “AI used in a valuation should be disclosed.” Simple. Sensible. Transparent. Right?

But like many things in valuation, simplicity comes wrapped in nuance. Disclosure alone does not guarantee reliability, accuracy, or even understanding. And there lies the rub. If disclosure becomes the regulatory goalpost, without further context or requirements, might it unintentionally signal to some practitioners that it is acceptable to let AI do all the heavy lifting, so long as the model is named, the version is stated, and the disclosure is made?

Imagine a report that reads: “This conclusion was determined using GPT-6, prompts available upon request.” That is not hypothetical. It is a plausible tomorrow. And therein lies a potential unintended consequence: treating disclosure as permission rather than protection.

The Guardrail Thought Experiment

So, what might unimpeachable guardrails look like for AI use in valuation practice? Let’s conduct a mental walkthrough of some possibilities, not proposals, mind you, just an intellectual exercise.

Guardrail 1: Mandatory Disclosure of AI Use

  • Pro: Increases transparency and supports peer review.
  • Con: May foster overreliance or justify abdication of professional judgment.

Guardrail 2: Prohibition Against Sole Reliance on AI-Generated Analyses

  • Pro: Reinforces the essential role of human judgment and contextual expertise.
  • Con: Difficult to enforce and may stifle innovation or efficient workflows.

Guardrail 3: Certification or Competency Frameworks for AI Use

  • Pro: Establishes baseline understanding and mitigates misuse.
  • Con: May burden smaller practitioners and require ongoing updating as models evolve.

Guardrail 4: Requirement to Retain Human-Readable Audit Trail of AI Use

  • Pro: Facilitates transparency and due diligence.
  • Con: Not all AI systems produce audit trails suitable for regulatory or legal review.

None of these is perfect. None solves every problem. But guardrails are not meant to eliminate every risk; they are meant to help us notice the risks, slow down, and make more intentional turns.

What We Don’t Want: Checkbox Ethics

AI ethics is not a compliance checklist. We must resist the urge to reduce this complex conversation to a binary matrix of "disclosed = good" and "undisclosed = bad." The ethical use of AI in valuation is not just about transparency; it is about competency, judgment, and accountability. These are the values that keep our profession unimpeachable, not just the AI tools we use.

Just because a model spits out a value conclusion does not mean we are done. Did it understand control premiums? Did it grasp normalization adjustments? Did it flinch at the weight of hindsight bias or double-counted risk factors? We do not just need to know what the AI said; we need to know why it said it and whether it made any sense at all.

The Specter of Unregulated AI Assistants

Let’s be honest: AI is already here. Some practitioners use it as a digital assistant; others have quietly promoted it to managing partner. The risk is not that AI will become too powerful, but that professionals will stop asking whether they should intervene. Tools should not replace training, nor should technology replace professional judgment.

If a practitioner uses an AI tool to summarize financials, compare guideline companies, or flag inconsistent narratives, that is a far cry from asking it to write an entire conclusion of value and copy-paste it into a report. The key lies not in the function, but in the role AI is being allowed to play. And this circles us back to training. Adequate, ongoing training in AI is not a luxury. It is the price of admission for anyone looking to responsibly incorporate machine learning tools into their valuation practice. We do not let unlicensed drivers behind the wheel of high-performance vehicles. Should we allow untrained practitioners to wield algorithmic engines without understanding how they work, or when they misfire?

Doubling Down on Principles When the Road Gets Bumpy

When life gets complicated, as it tends to do, both on winding mountain roads and in valuation practice, I have always believed in doubling down on principles. In moments of uncertainty, principles are the traction beneath our tires. And it is in that spirit that I offer another possible approach to mitigating AI-related risk: not by rewriting all the rules, but by reaffirming the ones that have served us for decades. Principle-based standards do not dictate how to perform every task; instead, they offer guidelines, conceptual guardrails, if you will, that steer practitioners toward consistent, defensible, and ethical work. In this sense, principle-based standards are guardrails. They do not tell you exactly which curve to take or how tight to grip the wheel, but they do keep you on the road.

It is important to remember, though, that no set of principle-based standards is completely free from rules, and no set of rule-based standards is devoid of principles. Much like democracy, socialism, or communism, the theoretical purity of each system does not exist in practice. Each framework contains shades of the other, blended by necessity, experience, and evolving needs. Principle-based systems may still require certain disclosures, competencies, or prohibitions. Rule-based systems, too, must lean on interpretive guidance, context, and judgment.

As AI evolves, the path forward may not require building entirely new roadmaps but rather issuing new signs and mile markers to help practitioners navigate the terrain using the same compass we have always relied on: professional judgment, integrity, objectivity, and due care. On balance, standards alone are not sufficient; supplemental guidance and training are needed to mitigate the most dangerous risks. This is because even the strongest principles and the best AI tools, like high-performance vehicles, require skill to operate. Supplemental guidance, in the form of FAQs, Q&As, advisory opinions, or specialized training, may offer exactly what is needed to apply these long-standing principles to modern, AI-infused realities. These resources do not override standards; they illuminate how to use them. Much like a headlamp on a foggy road, they help professionals see more clearly; they do not steer for them.

Unimpeachably Neutral Consensus Building

If this article accomplishes anything, I hope it is this: to get readers, especially those who may not spend their mornings in valuation standards meetings, to think about what they would want to see in an AI-augmented valuation world. What would make them trust the result? What role should professional standards play in creating that trust?

Are we okay with valuation reports where AI does 80% of the work and a credentialed professional merely signs off? Or do we want to retain a human touchpoint at every material judgment? Should standards allow flexibility depending on the task, or should they delineate bright lines? These are not questions with easy answers, but they are questions worth asking.


Zachary Meyers, CPA, CVA, is the managing member of C. Zachary Meyers, PLLC, specializing in litigation accounting and valuation services. He has been retained in over 2,900 matters since 2011 as a testifying expert, consulting expert, or neutral/court-appointed expert qualified in forensic accounting, business valuation, pension valuation, and taxation. Mr. Meyers has held multiple influential roles on national and international standard-setting bodies, where he has made significant contributions to the financial disciplines at the highest levels of the National Association of Certified Valuators and Analysts (NACVA), the Global Association of Certified Valuators and Analysts (GACVA), and The Appraisal Foundation (TAF).

Mr. Meyers can be contacted at (304) 690-2619 or by e-mail to czmcpacva@czmeyers.com.

The National Association of Certified Valuators and Analysts (NACVA) supports the users of business and intangible asset valuation services and financial forensic services, including damages determinations of all kinds and fraud detection and prevention, by training and certifying financial professionals in these disciplines.


©2024 NACVA and the Consultants' Training Institute • Toll-Free (800) 677-2009 • 1218 East 7800 South, Suite 301, Sandy, UT 84094 USA
