
AI, Ethics, and Standards in Valuation Practice

Guidance for a Rapidly Changing Landscape

Confidentiality remains the first and most immediate area of professional risk. Many AI tools store inputs, transmit data to outside servers, or use information to train future models. For valuation professionals handling confidential data, the stakes are high. The author shares how to use AI and adhere to professional and ethical standards.


Artificial intelligence (AI) has rapidly become part of the daily toolkit for many valuation and financial forensic professionals. Whether used for data cleanup, modeling assistance, industry research, or drafting support, AI can reduce time spent on lower-value tasks and help analysts focus on judgment, interpretation, and communication. If there is one theme that emerged clearly from this year’s National Association of Certified Valuators and Analysts (NACVA) Business Valuation and Financial Litigation Super Conference, it is our profession’s duty to adhere to ethical principles and established standards. In fact, NACVA’s own Artificial Intelligence and Machine Learning Commission (AIMLC) was created precisely to address that need.

In early 2024, NACVA announced the formation of the AIMLC, whose mission is “to demystify the rapidly evolving field of artificial intelligence for our constituents … to synthesize the extensive AI knowledge base into clear, practical guidance, aiding in integrating AI innovations into their valuation and litigation practices in a professional and ethical manner.” The AIMLC aims to be a reliable resource for NACVA boards and the broader membership in navigating the complexities of AI and related software in the business valuation and financial litigation field.[1] In October 2024, NACVA released its Advisory Brief: The Use of Artificial Intelligence and Machine Learning.[2] Since then, the organization has conducted bi-weekly webinars to guide practitioners in applying a variety of AI tools and continues to update its Standards and Ethics Frequently Asked Questions (FAQs) Library[3] to further support members in addressing the complexities associated with AI adoption.

These NACVA initiatives highlight how the profession is actively responding to the dual nature of AI: great promise but also heightened responsibility. An examination of four of the primary standards governing valuation practice—NACVA’s Professional Standards, the American Institute of Certified Public Accountants’ (AICPA’s) Statement on Standards for Valuation Services (SSVS) No. 1, the American Society of Appraisers’ (ASA’s) Business Valuation Standards and Principles of Appraisal Practice and Code of Ethics (PAPCE), and the Uniform Standards of Professional Appraisal Practice’s (USPAP’s) Ethics and Record Keeping Rules—demonstrates a notable consistency in expectations. Each emphasizes integrity, objectivity, competence, due care, and transparency. When AI is brought into an engagement, even in a supporting role, those same principles govern how we collect data, apply methods, communicate findings, and maintain documentation. AI does not replace human judgment, nor does it exempt us from verifying data or retaining sufficient work papers. Instead, it expands the scope of what must be monitored, documented, and disclosed.

Confidentiality remains the first and most immediate area of professional risk. The AICPA Code of Professional Conduct is unambiguous: client information may not be disclosed without consent. Many AI tools store inputs, transmit data to outside servers, or use information to train future models. For valuation professionals handling confidential data, the stakes are high. Uploading a client’s unredacted documents to an external AI platform is not a trivial matter. Doing so may violate standards, breach client agreements, and undermine the trust our profession depends on. The AIMLC advisory brief underscores the point that confidentiality, transparency, and professional judgment remain the foundation of our work, even when using highly capable technology.

Equally important is the requirement, embedded in SSVS 1, the ASA BVS, and the NACVA Standards, that professionals rely on sufficient relevant data and validate all methods, inputs, and assumptions. AI does not eliminate that responsibility; it heightens it. A large language model can generate a polished answer, but it may also fabricate citations, conflate distinct concepts, or supply market evidence that appears credible but has no verifiable source. Several recent legal and regulatory incidents underscore why this matters. In Mata v. Avianca[4] (S.D.N.Y. 2023), attorneys submitted fabricated AI-generated citations and were sanctioned for failing to verify their accuracy and exercise professional skepticism. Similar risks exist in valuation. Imagine reporting a discount for lack of marketability based on “industry chatter,” or selecting guideline companies an AI tool recommends without verifying size, industry match, or financial comparability. The result may be a conclusion that looks well written but lacks both credibility and defensible work papers. When we use AI, we have a duty to confirm all sources and verify all information provided. AI-supported outputs must be treated as computational or research aids, not conclusions.

In another case, Moffatt v. Air Canada[5] (2024), Air Canada was held responsible by the British Columbia Civil Resolution Tribunal for misleading information produced by its own customer-facing AI chatbot. Also in 2024, the Securities and Exchange Commission (SEC) brought enforcement actions against Delphia and Global Predictions,[6] accusing them of “AI washing,” which involves overstating technological capabilities in ways that mislead investors. Although valuators may lean toward understating rather than overstating their technological capabilities and usage, the cases highlight a simple truth: AI does not shield a professional or a firm from liability. Professionals are responsible for representations made in their name, regardless of the tools used.

Documentation is central to the profession’s expectations. USPAP’s Record Keeping Rule, along with the development and reporting requirements under SSVS 1, NACVA’s standards, and ASA’s standards, obligates analysts to maintain work papers that clearly explain what was relied upon, how the analysis was performed, and why the conclusion is reasonable. Any analysis performed by or with AI must be supported by appropriate documentation. When AI contributes to our work, whether through data summarization, draft text, or analytical suggestions, analysts should consider documenting in their work papers the prompts used, the output received, the verification steps taken, and the authoritative sources used to corroborate any material finding.

The solution is not to avoid AI, but to use it deliberately. Consider adopting concise internal policies that define approved tools, set rules for handling data, establish documentation expectations, delineate reviewer responsibilities, and outline training requirements. Engagement letters should include a short disclosure that AI may assist in the process, while clarifying that professional judgment governs all conclusions and that confidentiality and documentation requirements are fully observed. Also consider including language that prohibits clients and users of our reports from uploading any portion of our work into AI models or similar technologies, as doing so may disclose confidential client information. These measures do not burden the engagement; they strengthen it by setting clear expectations and protecting the client, the firm, and the integrity of the analyst’s work.

When used thoughtfully, AI offers significant benefits. It enhances efficiency, supports research, helps synthesize large volumes of information, and reduces repetitive mechanical tasks. It allows valuators to spend more time on what truly matters: exercising judgment, understanding economic reality, and articulating credible opinions. But these benefits can only be realized if AI is integrated into the valuation process with rigor, transparency, and a firm commitment to our professional standards.

The guiding principles are simple: AI may assist, but it may not conclude. The analyst is, and must remain, responsible for the integrity of every opinion expressed. As AI evolves, maintaining this disciplined approach will ensure that our profession not only adapts to technological change but strengthens its credibility in the process. AI changes the workflow, but not the ethical core of the profession. The analyst remains fully responsible for the integrity of the valuation, the security of client information, and the credibility of the resulting opinion.

[1] National Association of Certified Valuators and Analysts. (n.d.). Artificial Intelligence and Machine Learning Commission (AIMLC). http://nacva.com/aimlc

[2] National Association of Certified Valuators and Analysts. (2024, October 29). Advisory Brief: The Use of Artificial Intelligence and Machine Learning [PDF]. https://www.nacva.com/files/NACVA_AI_Advisory_Brief_2024.pdf

[3] National Association of Certified Valuators and Analysts. (2025). Standards and Ethics FAQ Library. https://www.nacva.com/standardsfaq

[4] Mata v. Avianca, Inc., No. 1:22-cv-01461 (S.D.N.Y. Jun. 22, 2023) [Document 54]. Justia. https://law.justia.com/cases/federal/district-courts/new-york/nysdce/1%3A2022cv01461/575368/54/

[5] Lifshitz, L. R., and Hung, R. (2024, February 29). BC Tribunal Confirms Companies Remain Liable for Information Provided by AI Chatbot. Business Law Today. American Bar Association. https://www.americanbar.org/groups/business_law/resources/business-law-today/2024-february/bc-tribunal-confirms-companies-remain-liable-information-provided-ai-chatbot/

[6] U.S. Securities and Exchange Commission. (2024, March 18). SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence (Press Release No. 2024-36). https://www.sec.gov/newsroom/press-releases/2024-36


Karen M. Lascelle, CPA, CVA, CFE, is a Managing Director with TSS Advisors, LLC. She consults with clients regarding business valuation, corporate taxation, financial attestation, and forensic engagements. She has been in public accounting since 1999. Ms. Lascelle is an adjunct professor at the University of Phoenix and Southern NH University (SNHU). She received her MBA in 2006 from SNHU. She was named to the Forbes Top 200 CPAs in America in 2024, recognized as a 2024 Five Star Financial Service Professional, honored as an Outstanding Member by NACVA, and named a National Top 40 Under Forty by NACVA. Ms. Lascelle is a member of the Ethics Oversight Board for NACVA and serves on the Artificial Intelligence and Machine Learning Commission.

Ms. Lascelle can be contacted at (603) 357-4882 or by e-mail to karen.lascelle@tss-advisors.com.

The National Association of Certified Valuators and Analysts (NACVA) supports the users of business and intangible asset valuation services and financial forensic services, including damages determinations of all kinds and fraud detection and prevention, by training and certifying financial professionals in these disciplines.


©2024 NACVA and the Consultants' Training Institute • Toll-Free (800) 677-2009 • 1218 East 7800 South, Suite 301, Sandy, UT 84094 USA
