
We Know Smart Contracts are “SMART”, but are they Ethical?

Emerging Technologies, Emerging Ethical Risks

This article summarizes points made in the Keynote Address at the 2023 NACVA Annual Conference.

On-Demand Course: We Know Smart Contracts are “Smart” but Are They “Ethical”? Emerging Technologies, Emerging Ethical Risks

2023 Keynote Address: We Know Smart Contracts are “Smart”, but are they “Ethical”? Emerging Technologies, Emerging Ethical Risks

It is not the business of the botanist to eradicate the weeds. Enough for him if he can tell us just how fast they grow. –C. Northcote Parkinson, in Parkinson’s Law, 1957.

On July 13, 2023, I was invited to deliver the Keynote Address at NACVA and the CTI’s Business Valuation and Financial Litigation Super Conference, held at the Cliff Lodge in Snowbird, Utah. This article is a synopsis of that presentation.

The learning objectives for this session were as follows:

  • Understand the myriad ethical issues that may arise in the context of emerging technologies.
  • Recognize that machine intelligence and emerging technologies are, by definition, insentient and not subject to rewards or punishment, and hence cannot meaningfully be held responsible or accountable.
  • Assess the fundamental limitation of current professional codes of ethics, which consider only human professionals and do not account for human-machine interactions or autonomous systems, both of which can produce complex ethical scenarios for which no standards or guidance presently exist.
  • Evaluate the dictum that “Ought implies can, but can does not imply ought”—the key bridge idea that links science and technology with ethics and social science (Winston and Edelbach, 2014).

The International Code of Ethics for Professional Accountants (“the Code”) promulgated by the International Federation of Accountants through its International Ethics Standards Board for Accountants (IESBA) contains the following fundamental principles: integrity, objectivity, professional competence and due care, confidentiality, and professional behavior. NACVA’s professional standards are similarly geared towards professional behavior and professional judgment, and place the human being at the center of judgment and decision making in all forensics and valuation engagements. However, all these principles draw upon and make reference to human traits and characteristics that can hardly be ascribed to computers and machine intelligence. Nevertheless, the pace of emerging technologies, especially the advent of blockchain and smart contracts, and more recently Generative Artificial Intelligence tools such as ChatGPT, has compelled consideration of ethical issues arising out of human-machine interactions (the field of “cybernetics” broadly defined), particularly when these are automated using algorithms and code.

A smart contract is an agreement in digital form that is self-executing and self-enforcing. These digital agreements sit atop a blockchain system (a decentralized digital ledger) and automatically fulfill themselves when certain terms and conditions are met. While highly efficient for routine and simple transactions, murkier ethical issues could be hidden in “algorithmic bias,” for instance (O’Neil, 2016). Ethical issues involve nuanced and complex interpretations of meaning about ethical obligations, and insentient technologies are simply not capable of such analysis, which must be done by human beings. Moreover, smart contracts are written in computer programming code and are therefore largely incomprehensible to lawyers, accountants, and other businesspeople. Who will trust a smart contract whose intended operation can be neither read nor understood by such professionals?
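
To make the self-executing mechanism concrete, the sketch below is a minimal, hypothetical illustration in Python of the logic a smart contract encodes. Real smart contracts run on a blockchain and are typically written in a specialized language such as Solidity; every name and amount here is invented for illustration.

    # Minimal sketch of a self-executing "smart contract" (hypothetical;
    # real smart contracts run on-chain, not in Python).
    from dataclasses import dataclass

    @dataclass
    class EscrowContract:
        buyer: str
        seller: str
        amount: float
        delivered: bool = False   # the coded condition the contract watches
        settled: bool = False

        def confirm_delivery(self) -> None:
            # An external event reports that the condition is met;
            # settlement then happens automatically, with no human
            # review in the loop.
            self.delivered = True
            self._settle()

        def _settle(self) -> None:
            if self.delivered and not self.settled:
                print(f"Transfer {self.amount} from {self.buyer} to {self.seller}")
                self.settled = True

    contract = EscrowContract(buyer="Alice", seller="Bob", amount=100.0)
    contract.confirm_delivery()   # condition met -> automatic settlement

The point of the sketch is that whatever assumptions, or biases, are baked into the coded condition execute exactly as written, whether or not any professional reviewing the arrangement can read the code.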

Further, ChatGPT-type tools are known to suffer from “AI hallucinations.” Consequently, we need a framework for understanding and analyzing the ethical scenarios that arise from implemented smart contracts and from the use of AI tools with ethical implications. A huge downside is the lack of transparency, and hence of accountability, because disputes over these arrangements cannot readily be resolved through the court system. Specifically, I sketched a scenario in which we soon reach a point when automated systems and their activities are no longer “auditable” in the traditional sense, leaving us at the mercy of these unaccountable systems. Yes, smart contracts are quite smart, but, unfortunately, they may not be ethical.

Some other points I made in my Keynote Address included the following:

  1. In the well-written Preface to his charming book of essays, Conscientious Objections, the late NYU professor Neil Postman (1988) makes the following remarks:
  • After all, who expects a machine to notice its own side effects? To care about social and psychic consequences of its own presence? Machines ask no questions, have no peripheral vision or depth perception. They see the future through the fixed eye of their technical possibilities.
  • Technology is a one-eyed king ruling unopposed amidst idiot cheering.
  • I object to this state of affairs.
  • We need lively discussions of where we are being taken, and in whose interests, by the unfettered development of technology … engineers, not our poets, are unacknowledged legislators of our time … but “Man cannot live by electric wiring alone …”
  • The brain is the only organ that does not feel pain … therefore, the brain does not regard brain damage to be a problem.
  2. The trajectory of the decision-making agent has steadily progressed: from the human being acting alone (including in team contexts), to human beings supplemented by technology tools (e.g., tax or valuation software), to human beings supplanted by technology with the rise of autonomous systems (e.g., self-driving vehicles). In this sort of end game, we confront the reality of a “bigger monster, weaker chains” (Stanley and Steinhardt, 2005).
  3. The June 2023 case of Steven Schwartz and the uncritical use of ChatGPT in preparing legal briefs is highly instructive. Rather contrite, Schwartz explained to a Manhattan judge, P. Kevin Castel, “I did not comprehend that ChatGPT could fabricate cases … I continued to be duped by ChatGPT. It is embarrassing.” (Weiser and Schweber, 2023).
  4. The eroding information environment described by the RAND Corporation as “truth decay” has the following characteristics:
    • Diminishing faith in traditionally authoritative information sources
    • Disturbing increases in:
      • Disagreement among individuals about objective facts
      • Conflation of opinion and fact in discourse
      • The volume and influence of opinion, rather than fact, in discourse
  5. On June 14, 2023, the European Union moved to adopt the AI Act (which would require summaries of copyrighted AI training data).
    • Risk: Extent of regulation depends on risk that goes from “minimal” to “unacceptable.”
    • Privacy Intrusion: Some systems such as facial recognition in public spaces (BBWY = Big Brother’s Watching You!), predictive policing, and social scoring to assign a “health score” are banned outright.
    • Content Influencing: Social media platforms with more than 45 million users who can be “content-influenced” are subject to tight restrictions, e.g., FB, Instagram, Twitter (and would include Meta’s new platform “Threads”).
    • Transparency Requirements: Systems such as ChatGPT would have to disclose that their content was AI-generated, distinguish deep-fake images from real ones, and provide safeguards against the generation of illegal content.
  6. Kant’s principle “ought implies can” means: if you are morally obliged to do X, then it must be possible for you to do X.
    • But just because technological advances mean we can, it does not follow that we ought to! (There may be a moral injunction against it.)
    • This maxim has important ethical ramifications when it comes to technology and social sciences.
    • Algorithmic bias and “algorithm aversion” (O’Neil, 2016): Algorithms make the decisions, but human beings face the consequences. Whom do we hold culpable? (A minimal illustrative sketch follows this list.)
    • Technology aversion and automation bias (Fry, 2018): You are accused of a crime. Who would you rather have determine your fate, a human or an algorithm? An algorithm is more consistent and less prone to errors of judgment; yet a human can look you in the eye before passing sentence.
  7. Top “information ethics” concerns:
    • Misuse of personal information
    • Misinformation and deep fakes
    • Lack of oversight and acceptance of responsibility
    • Use of AI (e.g., ChatGPT)
    • Autonomous technology
    • Respect for employees and customers
    • Moral use of data and resources
    • Responsible adoption of disruptive technologies
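
As a companion to the algorithmic bias point above, the following minimal Python sketch (hypothetical; every number, name, and threshold is invented) shows how a scoring rule that leans on historically biased outcomes can quietly turn past discrimination into present policy: the algorithm makes the decision, and the applicant bears the consequences.

    # Hypothetical sketch of algorithmic bias: a loan-approval score that
    # anchors on historical approval rates by ZIP code. If past decisions
    # were biased, the algorithm reproduces that bias. All data invented.
    historical_approval_rate = {
        "10001": 0.85,   # area with a favorable (possibly biased) history
        "60629": 0.30,   # area with an unfavorable (possibly biased) history
    }

    def loan_decision(zip_code: str, income: float) -> bool:
        prior = historical_approval_rate.get(zip_code, 0.50)
        score = 0.7 * prior + 0.3 * min(income / 100_000, 1.0)
        return score >= 0.60

    # Two applicants with identical incomes but different ZIP codes:
    print(loan_decision("10001", 40_000))   # True  -- approved
    print(loan_decision("60629", 40_000))   # False -- denied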

The work of Brian P. Green, Director of Technology Ethics at the Markkula Center for Applied Ethics, Santa Clara University, is highly recommended.

  8. In closing, I urged NACVA’s Ethics Oversight Board (EOB) to take a leadership role and place on its docket the important topic of providing guidance to business valuation and financial litigation services (BVFLS) practitioners. The scenarios of human beings acting alone, supplemented by technology, and possibly supplanted by technology must all form part of the BVFLS profession’s Code of Ethics going forward.

Again, I appreciate the opportunity to share my thoughts on this important topic. As I told Brien Jones, NACVA’s COO, I stand ready to assist NACVA’s newly formed Leadership Initiatives Board (LIB) in its deliberation on these and other important challenges facing the BVFLS profession.

References

Fry, H. (2018). Hello World: How to be Human in the Age of the Machine. Transworld.

O’Neil, C. (2016). Weapons of Math Destruction. Crown Random House.

Postman, N. (1988). Conscientious Objections: Stirring Up Trouble About Language, Technology, and Education. New York: Alfred A. Knopf, Inc.

Stanley, J. and Steinhardt, B. (2005). Bigger Monster, Weaker Chains: The Growth of an American Surveillance Society. American Civil Liberties Union Technology and Liberty Program.

Weiser, B. and Schweber, N. (2023). The ChatGPT Lawyer Explains Himself. The New York Times, June 8, 2023.

Winston, M. and Edelbach, R. (2014). Humanity and Technology: Global Ethics. Mason, Ohio: Cengage Learning.


Dr. Sridhar Ramamoorti, ACA, CPA, CITP, CFF, CGMA, CIA, CFE, CFSA, CGAP, CGFM, CRMA, CRP, MAFF, is an Associate Professor of Accounting at the University of Dayton, Ohio, and since January 2020 has been designated a Sustainability Scholar affiliated with the UD Hanley Sustainability Institute. Previously, he was on the accounting faculties of Kennesaw State University, Georgia, and the University of Illinois at Urbana-Champaign. His research interests span corporate governance and ethics, risk management, internal and external auditing, and forensic accounting.

Dr. Ramamoorti has a blended academic-practitioner background with over 35 years of experience in academia, auditing, and consulting. Originally trained as a Chartered Accountant in India, he worked for Ernst & Young (now EY) in the Middle East before obtaining his PhD in quantitative psychology from The Ohio State University. Earlier in his career, he was a principal with Andersen Worldwide (1998–2002), the National EY Sarbanes-Oxley Advisor (2004–2005), a corporate governance partner with Grant Thornton (2005–2009), and a principal of Infogix, Inc. (2009–2010).

Dr. Ramamoorti is co-author of over 60 papers and articles and 15 books and monographs. He has received numerous research grants and awards for his research as well as for teaching excellence. He is the lead author of the prize-winning book A.B.C.’s of Behavioral Forensics (Wiley, 2013), which has been presented to the FBI Academy, and co-authored a 2020 COSO Research Report on Blockchain and Internal Control, sponsored by Deloitte, which has received mention in The Wall Street Journal. He recently became Chairman of the Executive Advisory Board of the MMBA Blockchain Academy (www.mmba.io).

On behalf of the Forensic Accounting Section of the American Accounting Association, Dr. Ramamoorti collaborates with NACVA and other professional organizations to advance research, practice, and education in forensic accounting and valuation. He is a long-time member of the Editorial Board of the Journal of Forensic and Investigative Accounting published by NACVA.

Active in the profession, he is currently a member of the National Board of Directors of Financial Executives International (FEI), an Advisor to the NACVA Litigation Forensics Board, and an Academic Advisor and Board member of the Institute for Truth in Accounting. He has served as a Trustee of the Financial Education & Research Foundation (FERF Chair, 2022–2023) and the Internal Audit Foundation, and as Chairman of the Academy for Government Accountability, among other volunteer leadership roles. From 2014–2016, he was a member of the prestigious Standing Advisory Group of the U.S. Public Company Accounting Oversight Board. In the last two decades, he has presented his work at conferences in 16 countries.

Dr. Ramamoorti can be contacted at (937) 886-8318 or by e-mail to sridhar.ramamoorti@gmail.com.

The National Association of Certified Valuators and Analysts (NACVA) supports the users of business and intangible asset valuation services and financial forensic services, including damages determinations of all kinds and fraud detection and prevention, by training and certifying financial professionals in these disciplines.
