Why We Are All AI Ethicists Now

AUTHOR'S ATTESTATION: This article was written entirely by Jeff De Cagna FRSA FASAE, a human author, without using generative AI.

In the early days of the COVID-19 pandemic, I attended dozens of webinars and online sessions. This is not an unusual statement for a person whose privileged life made it possible to remain largely sequestered from direct harm during the 2020 lockdown period. Far more unexpected, however, is that I continue to think deeply about (and act on) the words of one webinar presenter. Maria Luciana Axente, serving then (and now) as PwC’s Responsible AI & AI for Good Lead, challenged her audience to accept that we must all be AI ethicists: our futures were on the line as AI development and adoption continued apace, and addressing AI’s ethical problems and questions would require the full participation of a genuine diversity of voices and perspectives.

More recent AI developments underscore the prescience of Axente’s call to action. OpenAI’s introduction of ChatGPT late last year fully unleashed generative AI on society and contributed to a rise in public concern about the consequences of AI use. According to a recent report from the Pew Research Center, “52% of Americans say they feel more concerned than excited about the increased use of artificial intelligence.” Notably, while the gap between concern and excitement is smaller for young people (ages 18-29) than for people 65 and over, significant AI fears exist across all major demographic groups.

In the association community, there is far more vocal advocacy in support of AI implementation and use than there are intentional efforts to understand and address the ethical questions raised by AI-enabled products and services. No matter how you contribute to the work of associations, your role as an AI ethicist makes it crucial to keep three foundational concerns in mind:

• AI is deeply flawed—Most advocates choose to focus on generative AI’s efficiency and productivity benefits while relegating ethical considerations to secondary or tertiary status. We cannot simply ignore generative AI’s serious flaws, however. For example, generative AI has a strong tendency to “hallucinate,” which is a polite term for making up things that never happened. But as Naomi Klein asks, “Why call the errors ‘hallucinations’ at all? Why not algorithmic junk? Or glitches?” The answer is simple: the language of hallucination creates an illusory human veneer for generative AI, making its flaws seem more normal and reducing our resistance to its adoption.

• AI is the cause of real harms against real people—Generative AI is the source of myriad harms, including ageism, sexism, racism, bias and discrimination, deepfakes and misinformation, and surveillance. While federal and state laws and regulations may eventually provide essential protections for human beings from problematic AI, they are not yet in place, and getting to that point will take considerable time and effort. (AI advocates will make sure of that.) Until such protections are enacted, each of us must push back on the growing insistence that it is acceptable to deploy these concerning technologies widely in the absence of meaningful and effective ethical safeguards.

• AI is a direct threat to human agency—While the actual danger of so-called “doomer” scenarios for artificial general intelligence (AGI) is vastly exaggerated, UMass Boston professor Nir Eisikovits argues, “…the increasingly uncritical embrace of [AI], in a variety of narrow contexts, means the gradual erosion of some of humans’ most important skills. Algorithms are already undermining people’s capacity to make judgments, enjoy serendipitous encounters, and hone critical thinking.” It is up to us to defend our agency every day by devoting our attention to intentional learning about generative AI so that we are well prepared to resist its expanded imposition by fiat.

For those of us leading privileged lives that (mostly) insulate us from the worst of generative AI’s serious ethical risks and harms, acting on Maria Luciana Axente’s 2020 plea must be a moral imperative. In 2023 and beyond, everyone must be an AI ethicist. It is a solemn responsibility we owe to ourselves, our fellow human beings, and the future of humanity.

Jeff De Cagna FRSA FASAE, executive advisor for Foresight First LLC in Reston, Virginia, is an association contrarian, foresight practitioner, governing designer, stakeholder and successor advocate, and stewardship catalyst. In August 2019, Jeff became the 32nd recipient of ASAE’s Academy of Leaders Award, the association’s highest individual honor given to consultants or industry partners in recognition of their support of ASAE and the association community.

Jeff can be reached at [email protected], on LinkedIn at jeffonlinkedin.com, or on X (Twitter) @dutyofforesight.

DISCLAIMER: The views expressed in this post belong solely to the author.

Jeff will discuss this topic in greater detail at his seminar, "Association Boards and Technological Harm," on November 2. The seminar will be offered both in person and virtually. Learn more and register.
