Artificial Intelligence and Diversity, Equity and Inclusion
Dec 06, 2023

The presence and proliferation of AI in our lives is palpable, its evolution staggeringly fast, its consequences unpredictable, and the dangers and opportunities vast. This includes the dangers and opportunities for Diversity, Equity & Inclusion (DEI).
According to Britannica [1], artificial intelligence (AI) is “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.” It further states that “some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, voice or handwriting recognition, and chatbots.”
The 'A' in AI may as well stand for 'algorithmic', since at the heart of its “intelligence” is the use and modification of algorithms by machines to apprehend situations, evaluate them, and recommend or make specific decisions. As the authors of “What Do We Do About the Biases in AI?” point out: “We must responsibly take advantage of the several ways that AI can improve on traditional human decision-making.”[2] They continue that “with more advanced tools to probe for bias in machines, we can raise the standards to which we hold humans.”[3]
AI AND DIVERSITY, EQUITY & INCLUSION
In a recent LinkedIn Newsletter, influential DEI leader Dr. Rohini Anand reviews the benefits and promises of AI to advance DEI [4]. She outlines the obvious advantages of AI for the pursuit of DEI, namely that it enhances accessibility and translation, automates routine tasks, and drives productivity and efficiency. AI can innovate training (including DEI training), greatly improve HR analytics, make research fast and effective (including DEI benchmarking), particularly for resource-constrained DEI teams, and assist in de-biasing communications and recruiting.
Anand's review is balanced, as she also points out the red flags and watch-outs for DEI professionals. She notes, for example, that in terms of translation, AI falls short in capturing linguistic nuance and cultural context. In terms of de-biasing, she is clear that “because AI draws on existing data that reflect our biases, often the alternatives suggested are also biased and require human intervention to weed out the bias and to ensure that the language is appropriate for the context.”
In the end, she concludes that “AI can be a tool to enhance, support and scale organizational DEI efforts in an efficient, scalable, creative and cost-effective way – not replace DEI professionals. If we are to make progress in DEI, we need to be diligent and not allow AI to perpetuate or amplify historic discrimination. In order to do that, DEI professionals need to re-tool themselves to understand AI and be able to ask the right questions. Questions like: What data are being used to train AI? How are we auditing the AI tools?”
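To make the question of auditing concrete, here is a minimal, illustrative sketch of one common type of check a DEI professional might ask for: comparing the selection rates an AI screening tool produces across demographic groups and flagging large disparities against the “four-fifths” rule of thumb. The data, group labels, and threshold below are hypothetical, and a check like this is a starting point for inquiry, not proof of bias or its absence.

```python
# Hypothetical audit of an AI screening tool's recommendations.
# Each record is (demographic_group, selected_by_model). The groups,
# data, and 0.8 threshold (the "four-fifths" rule) are illustrative only.

from collections import defaultdict

records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Count how many candidates from each group were seen and selected.
totals, selected = defaultdict(int), defaultdict(int)
for group, was_selected in records:
    totals[group] += 1
    if was_selected:
        selected[group] += 1

# Selection rate per group.
rates = {group: selected[group] / totals[group] for group in totals}
print("Selection rates:", rates)

# Compare each group's rate to the highest rate; flag ratios below 0.8.
benchmark = max(rates.values())
for group, rate in rates.items():
    ratio = rate / benchmark
    status = "OK" if ratio >= 0.8 else "REVIEW: possible adverse impact"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {status}")
```

The value of such a sketch is less the arithmetic than the habit it represents: turning “how are we auditing the AI tools?” into a concrete, repeatable test that can be run every time the tool or its training data changes.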
This is good advice for DEI professionals! It is also woefully optimistic about their influence in the organizations and institutions that they serve. I suspect that overestimating our influence may only be the first of several awareness gaps in our quest to understand the impact of AI in general and the consequences for DEI work specifically.
GAPS IN OUR AWARENESS OF AI AND DEI
Gap #1: Overestimating the influence of DEI professionals in organizations and institutions
In many organizations, DEI is not funded or resourced at the level required to have the impact it seeks to make. Often the gap is stark between what the websites, recruiting messages, or ad campaigns promise the employee experience to be and the cultural reality of the organization. If DEI were as important as these promises state, it would be more central to the Talent/HR function and its systems, and inclusive leadership would be practiced across all levels of the organization.
Very few organizations can claim that their description of their culture matches the experienced reality. Before DEI professionals can ask the important questions Dr. Anand suggests, they need to ask themselves whether they are sufficiently empowered and positioned to ask these questions and, more importantly, whether they are expected and supported to shape the answers. Will those who decide how AI is used actually listen and take seriously the concerns and suggestions of DEI leaders? If not, how can we create the conditions for a constructive engagement with DEI?
As is often the case, the organizations that have invested in building an authentic DEI culture and made the practices of Inclusive Leadership core to their cultural development stand to benefit from their sustained focus and investment. This is particularly true now, since the dizzying speed of AI development and proliferation is only poised to accelerate. Indeed, DEI leaders may not have sufficient time to build their credibility and champion their structural empowerment within an organizational system. It matters, for example, whether DEI is buried somewhere in the HR/Talent function (as is most common) or whether DEI is a separate function matrixed across others, including HR/Talent. The former contains DEI as one of many elements of HR and subordinates it to the HR/Talent strategy. DEI as a separate function supports an enterprise-wide scope that cuts across all functions, enabling alignment with a broader DEI commitment that includes how an organization goes to market, shapes its brand, designs, develops, sources, and delivers its products and services, and how it innovates. The former tends to support a rather narrow definition of DEI, whereas the latter supports a strategic and broad understanding of it.
Gap #2: The optimism and positivity bias around AI
It would be surprising, and perhaps even irresponsible to their shareholders, customers, and employees, for business leaders to ignore AI or “pause” it, as some prominent voices demand [5]. The ideology of perpetual growth, progress, and competition so strongly pervades the logic of business that we do not question it [6]. And why should we? So far, it seems to have worked. So far, technology has delivered improvements and “progress” that have benefited increasing numbers of people. Why should AI be any different?
As a result of this rather reasonable position, we proceed to build AI solutions and embed AI ever deeper in products and services, delighted with the cost savings, the speed, the promise of eliminating routine tasks, and even the power to produce art and entertainment.
And yet, many of us sense that we are indeed at an historic inflection point that may not quite lead to the future we imagine. This is a lesson from history worth remembering: the future has never been what we imagined it to be at the dawn of new technologies. Even though our imagination and predictive abilities have proven quite flawed, most of us have learned to implicitly associate new technologies with the notion of improvement. We enthusiastically trade in old technologies for new ones and readily upgrade them in expectation of “better, faster, easier”.
If, however, the future is no longer what it used to be, as Oxford scholar Jörg Friedrichs [7] argues (because we can no longer rely on the comforting assumption that it will resemble the past), we are challenged to critically examine our own optimism about AI and to question our underlying assumptions. This might indeed be a suitable and appropriate mission for DEI leaders and practitioners, who should be well rehearsed and prepared to spot, challenge, and mitigate biases, i.e., implicit assumptions. Given the speed, scale, complexity, and interdependence of the challenges facing our societies, communities, organizations, and institutions, it seems imperative to question the certainty of our beliefs, the accuracy of our sense-making, and our ability to take meaningful and effective action.
Provided they have been successful at closing gap #1, this role seems tailor-made for DEI leaders who uphold a vision of an equitable, inclusive, and sustainable future in which diversity is protected and leveraged, so that all can thrive.
Gap #3: Underestimating the need for change leadership [8]
Innovation, particularly in the form of new technologies and new tools, ushers in change. In our focus on the features and functionalities of new technology, we tend to underestimate the social and cultural repercussions. Most often, the organizations that generate the innovations and monetize them also externalize or disavow responsibility for any negative consequences. Social media companies, for example, distance themselves from the harmful psychological, social, or political effects of social media use, but are unashamed to profit from them.
The harmful impacts of technology may be exacerbated in a digital and globally interconnected world, where the rate of change outpaces the capacity of our psychological, social, and cultural systems to absorb, embed, integrate, and leverage new technologies in service of people and humanity at large. Responsible actors and innovators will be required to forecast and mitigate these impacts and to support the integration of new technologies and capabilities in ways that minimize harm. In addition to pursuing technological innovation for its own sake, these organizations will need to facilitate socio-cultural integration.
Given the focus of DEI leaders and practitioners, inclusive and equitable change enablement may be a key aspect of their contribution to organizations that become more aware of and committed to their role and responsibility in shaping the social and cultural context of humanity as well as the ecosystems on which we all depend. This is where ESG and DEI meet to temper the organizational hubris that takes for granted the right to shape and change both natural and human worlds.
If DEI leaders and practitioners take on the formidable challenge of raising the standards of responsibility that their organizations set and uphold, they are also well advised to include non-human dimensions of diversity in their thinking, in particular the threat of climate change and the extinction of species, as well as the growing awareness of intelligence, language, and consciousness in other animals. Humanity’s ecological embeddedness in general, and our ability to respect and represent non-human diversity in particular, may become essential to human viability and thriving, and therefore also to the AI we invent and deploy. The work of Melanie Challenger [9] and of Lisz Hirn [10] is particularly compelling and relevant here.
WHAT DOES ALL THIS MEAN FOR DEI PROFESSIONALS?
The opportunities and gaps highlighted here suggest that now is the time for DEI professionals to look ahead and lift their thinking and ambition higher than merely re-tooling themselves to understand AI and ask better questions.
Their gaze needs to include the entire ecosystem and its systemic interconnectedness, putting all aspects of the human and social experience into view. DEI professionals should be steadfast in their vision of an equitable, inclusive, and sustainable future in which diversity is protected and leveraged, so that all stakeholders can thrive.
For all of us with affinity for DEI, this is also a call to evolve the ambition and aspiration of DEI beyond fixed and/or narrowly conceived and de-contextualized notions of diversity, equity and inclusiveness.
This is also where AI might help evolve and innovate the practice of DEI significantly. Just imagine, for example, that AI could map the real-time patterns of psychological safety and belonging uncertainty across a globally diverse employee base and predict their consequences. We may then be able to revise the deductive logic that underpins so much of DEI practice, namely using specific and statically conceived social dimensions/categories of difference (or identities) as predictors of, and explanations for, specific behaviors, sensitivities, and decisions.
What would we do, for example, if we found empirical and algorithmic evidence that such experiences cut across very different demographic and/or experiential lines, accounting not just for the intersectionality of our senses of self but perhaps surfacing very different aspects of social experience and cognition? What would this do to our ability to take action and improve the subjective experience of our employees, constituents, or stakeholders? What if AI could give us a real-time picture of the degree and pattern of empathy, care, and connectedness (resonance vs. dissonance) that leaders demonstrate across their interactions? Could we link these insights to performance evaluations or talent decisions to refine and personalize our understanding of bias? Could this help us better develop and select inclusive leaders?
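As one hedged illustration of what “cutting across demographic lines” might look like in practice (my own sketch, not a tool referenced here), an analyst could cluster psychological-safety survey responses without using demographic attributes as inputs, and only afterwards examine how the resulting clusters relate to demographic categories. The survey items, scores, and group labels below are invented for the example.

```python
# Hypothetical exploration: cluster psychological-safety survey responses
# without using demographic attributes, then inspect how clusters map onto
# demographics afterwards. Survey items, scores, and groups are made up.

import numpy as np
from sklearn.cluster import KMeans

# Each row: responses (1-5 scale) to four illustrative survey items,
# e.g. "I can raise concerns", "Mistakes are held against me", etc.
responses = np.array([
    [5, 4, 5, 4], [4, 5, 4, 5], [2, 1, 2, 2], [1, 2, 1, 2],
    [5, 5, 4, 4], [2, 2, 1, 1], [4, 4, 5, 5], [1, 1, 2, 1],
])
demographics = ["A", "B", "A", "B", "B", "A", "A", "B"]  # hypothetical labels

# Cluster on the survey responses only -- demographics are NOT an input.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
clusters = kmeans.fit_predict(responses)

# Only afterwards, compare clusters with demographic categories to see
# whether patterns of experience cut across the predefined groups.
for cluster_id in sorted(set(clusters)):
    members = [demographics[i] for i in range(len(clusters)) if clusters[i] == cluster_id]
    print(f"Cluster {cluster_id}: demographic mix = {members}")
```

In this toy example the clusters are driven entirely by the pattern of responses; whether they align with demographic categories, cut across them, or surface something else entirely is an empirical question rather than a starting assumption.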
Would we be able to realistically model the economic, social and environmental impact of business decisions and strategies on given communities and stakeholders? Would this account for the interconnected, increasingly AI-driven projections and decisions of our competitors and other institutional and governmental actors? Would we realistically factor in the existing interests, inequities, and power differentials? Would we include variables of relational health and human motivation, such as equity and fairness and the role of status insecurity and power differentials in our individual and collective decision-making and institutional design? How would equity and fairness extend to the non-human stakeholders of AI-optimized human decision making?
These are just examples of the type of questions DEI professionals may ask from the vantage point of their lifted gaze. And, in pursuit of answers, the DEI professional may be perfectly suited to catalyze and structure collaborations among diverse sets of stakeholders and knowledge holders to inform the ethical sustainability of the AI-powered standards to which we hold humans.
What if the challenge before DEI Professionals is to extrapolate the ethical principles of DEI and uphold and advocate them as they influence decision-makers in the development of AI systems? And, what if DEI Professionals leveraged AI tools to answer the questions imagined above (and many more)?
In that sense, I more than agree with Dr. Anand that AI will not replace DEI professionals. I would argue that AI makes DEI professionals indispensable to the responsible and ethically sustainable development of AI systems in service of human thriving. Beyond re-tooling, it might require a critical, creative, and strategic rethinking of DEI and its role in an AI-powered future; as well as savvy DEI leaders who can overcome the three awareness gaps noted above.
- Joerg Schmitz is the Managing Director of the Inclusive Leadership Institute
[1] https://www.britannica.com/technology/artificial-intelligence
[2] James Manyika, Jake Silberg, and Brittany Presten (2019) What Do We Do About the Biases in AI? Harvard Business Review, October 2019
[3] James Manyika, Jake Silberg, and Brittany Presten (2019) What Do We Do About the Biases in AI? Harvard Business Review, October 2019
[4] https://www.linkedin.com/pulse/ai-implications-diversity-equity-inclusion-dei-rohini-anand-phd/?trackingId=aGql6IMzQDiLEQPikA37LA%3D%3D
[5] See, for example, Stuart Jonathan Russell's interview with Sean Illing in The Grey Area, episode “Should we press pause on AI?”
[6] See, for reference, Jason Hickel (2020) Less is More, Penguin (ISBN: 9781786091215)
[7] Jörg Friedrichs (2017) The Future Is Not What It Used to Be, MIT Press (ISBN: 9780262533652)
[8] Change leadership is different from change management.
[9] See, for reference, Melanie Challenger (2023) Animal Dignity, Bloomsbury Publishing (ISBN: 9781350331693), or her article “Animals in the Room” at https://emergencemagazine.org/essay/animals-in-the-room/
[10] Lisz Hirn (2023) Der überschätzte Mensch, Paul Zsolnay Verlag