Contribution to the Debate on AI and Ethics—First Quarter 2023
NACVA Ethics Oversight Board
There is growing consensus and debate about the revolutionary role that artificial intelligence (AI) will play in society, the economy, and the planet. Global leaders across multidisciplinary backgrounds are determined to sustainably embrace the changes AI brings and to look for new solutions to some of the large-scale problems plaguing modern society, in areas such as fraud prevention and disease treatment and prevention, to name a few. These leaders seek to provide personalized service offerings while creating unique patient and consumer experiences. In my view, AI needs to be human centric, and it needs to enhance human interactions, not replace them.
To ensure a prosperous socioeconomic and environmental evolution based on AI, its development and use should respect ethical principles such as those currently outlined in NACVA’s Professional Standards,[1] which are a cornerstone and guide for practitioners and professionals.
Reflections about an ethical AI
An ethical AI world requires the application of ethical principles from the development of AI through to its use. AI is like any other new technology: its true value lies in its application, not in the technology itself. AI4People,[2] a forum of academics and experts in AI and ethics, has proposed the following principles as the foundation for an ethical AI:
- Beneficence (doing good) – this refers to promoting well-being, preserving dignity, and sustainability.
- Non-maleficence (do not harm) – this refers to ensuring privacy, security, and ‘capability caution’.
- Autonomy – this refers to ensuring the power to decide (supporting people to make decisions).
- Justice – this refers to promoting prosperity and preserving solidarity (non-discrimination).
- Explainability – this refers to ensuring transparency and accountability (users should know ‘if’, ’how’, and ‘why’ an AI system suggested one outcome over another).
“Explainability” is required to support transparency and accountability in AI. The way a model reaches its outcomes must map onto the audience’s mental model or it will not be comprehensible. Different audiences require different explanations, so no single approach will work for all applications. It is paramount that the outputs of the algorithms can be properly understood by non-technical audiences; this is necessary to evaluate fairness and gain trust.
Key considerations for AI applications:
- Utilize data as a key enabler of AI by developing meaningful datasets, in both quantity and quality, that enable a fair and ethical AI ecosystem.
- Create social inclusion and cohesion by improving educational and professional training systems to increase literacy across the workforce.
- Promote self-regulation by establishing guiding ethical principles and applying them holistically throughout organizations.
- Develop metrics of trustworthiness for AI products and services to serve as the basis for a system that enables user-driven benchmarking of AI offerings.
- Limit liability through encouraging AI developers and users to understand the key issues and the tools that mitigate risk to end users and patients. As society continues to pilot, adopt, and rely on AI technologies to reshape the future of decision making, AI that can be trusted to be transparent, fair, explainable, and secure is imperative.
- Support market access through efficient application of the existing framework of rules and regulations to validate, authorize, and certify AI-based products, for example by bringing AI expertise into regulatory agencies.
- Be aware of privacy considerations. Some AI systems utilize personal data, while others use data that cannot be linked to individuals. If personal data is utilized, appropriate consent must be obtained, and use must be consistent with the purpose for which the data was collected.
- Encourage international cooperation among policy makers on ethical guidelines, helping to ensure an inclusive and global approach.
In closing, to achieve and thrive in an ethical, trustworthy, and sustainable AI-rich world, it is crucial to have an appropriate and agile policy framework that fosters innovation; builds a data culture that enables AI while ensuring personal data privacy protection; and supports social cohesion by educating people and upskilling the workforce while promoting ethical behavior.
Reneé Fair, CVA®, is a Certified Valuation Analyst (CVA) and managing partner and co-founder of Trustee Capital LLC, an independent valuation and analytics automation consulting firm based in Tampa, FL. Trustee Capital LLC specializes in business valuation, data visualization, analytics, and robotic process automation. Mrs. Fair was elected by the NACVA membership to the National Association of Certified Valuators and Analysts (NACVA) Ethics and Oversight Board in 2021.
Mrs. Fair can be contacted at (813) 397-3648 or by e-mail to rfair@trusteecap.com.