The Dangers of Relying on AI to Hire Private Company Board Members
Artificial Intelligence (AI) is transforming the landscape of recruitment across various sectors, including the selection of members for both Advisory and Governance Boards in private companies. The promise of AI-driven hiring solutions lies in their potential to streamline processes, reduce costs, and enhance decision-making through advanced data analysis.
Nearly every CEO (95%) in a recent EY survey said that they plan to maintain or accelerate transformation initiatives, including artificial intelligence (AI) and other technologies, in 2024. Meanwhile, institutional investors see responsible AI as an emerging engagement priority, and it’s no surprise that directors rank innovation and evolving technologies as a top priority in 2024.
Harvard Law School Forum on Corporate Governance, May 16, 2024
Indeed, whether generative or traditional AI is being used to craft job descriptions, research and source potential candidates, or screen individuals for technical and cultural fit, there is no doubt this technology can help streamline the more manual tasks.
However, the application of AI in hiring for Governance and Advisory Boards also comes with significant risks and challenges. These include unconscious bias; affinity bias, whereby an AI-trained resume-screening program gives more weight to candidates from the same universities or corporations as the hiring manager; under-represented data sets; and the inability to weigh the gray areas of a candidate's experience. Each of these risks highlights the necessity of careful consideration and human oversight for such high-stakes appointments.
Perpetuation of Bias and Inequality
AI systems learn from historical data, which can often contain entrenched biases. When it comes to Advisory and Governance Boards, this issue is particularly pronounced. Historical data on board composition often reflect longstanding biases related to gender, race, socioeconomic background, and educational pedigree. If these biases are embedded in the data used to train AI systems, the AI will most likely reinforce and perpetuate them, resulting in a lack of diversity and inclusivity among boards.
A study conducted by The Urban Institute raised basic questions about the ability of many boards to truly represent and respond to the diversity of the public they serve. On average, 86 percent of board members are white, non-Hispanic; 7 percent are African-American or Black; and 3.5 percent are Hispanic/Latino (the balance is from other ethnic groups). Medians convey even greater homogeneity: 96 percent for white members and zero for African Americans and Hispanics. Fifty-one percent of nonprofit boards are composed solely of white, non-Hispanic members. Only a minority of nonprofits say that ethnic or racial diversity is a somewhat important (25 percent) or very important (10 percent) recruitment criterion. Boards of smaller nonprofits are more likely to be predominantly white.
A diverse governance board is crucial for presenting varied perspectives and fostering truly innovative decision-making. Moreover, board members who hail from the same school or corporation are less likely to hold their peers accountable. Overlooking qualified candidates from underrepresented groups due to poorly trained, biased AI algorithms can limit the board’s effectiveness, ultimately damaging the company’s reputation and ability to function.
Opacity and Accountability Issues
AI algorithms, often described as "black boxes," operate with a level of complexity that can obscure their decision-making processes. This opacity can be particularly problematic in the context of selecting Governance and Advisory Board members, where accountability and transparency are paramount. If the AI programming used in recruiting is too intricate or opaque for the typical board member to understand, flawed recommendations for board appointments cannot be identified, let alone corrected. Absent the rationale behind these decisions, the board is ill-equipped to question how its own appointments were made.
Moreover, this lack of transparency can erode trust among stakeholders and make it difficult to address concerns about fairness, inclusivity and discrimination. Ensuring that AI-driven decisions are transparent and explainable is essential for maintaining stakeholder confidence and upholding ethical standards.
Overreliance on Quantitative Metrics
AI systems excel at processing and analyzing large volumes of quantitative data, but Governance and Advisory Board appointments often hinge on qualitative factors such as leadership ability, strategic vision, a willingness to hold leaders accountable, and interpersonal skills. These attributes are not easily quantifiable, are rarely evident in a board bio, and may be overlooked by AI systems that place too much weight on measurable, numeric criteria.
An overreliance on AI could result in the selection of candidates who excel on paper but lack the nuanced qualities necessary for effective governance. Human judgment is indispensable in evaluating these intangible aspects, and a balanced approach that incorporates both AI insights and human evaluation is crucial.
Although technology has automated many aspects of the recruitment process, such as screening resumes and scheduling interviews, it can never replace the level of insight that a human recruiter can bring to the table. Technology-based tools are useful in assessing a candidate’s basic skills, but they cannot assess softer skills like communication and teamwork. So, while AI and automation can certainly help a recruiter in the process, it is ultimately up to them to ensure they are selecting the right candidate based on their unique characteristics.
Steve Forbes, Forbes Contributor
Privacy and Security Concerns
The use of AI in hiring for Advisory and Governance Boards involves the collection and analysis of extensive personal data, raising significant privacy and security issues. Board candidates may be subject to detailed scrutiny, including their professional histories, personal backgrounds, and digital footprints. The handling of such sensitive information must be conducted with the utmost care to protect privacy and prevent data breaches.
Companies must adhere to strict data protection regulations and implement robust cybersecurity measures to safeguard candidates' information. Transparent communication about data usage and obtaining explicit consent from candidates are also vital to addressing privacy concerns. Additionally, without a comprehensive policy outlining when generative AI can and should be used in the recruiting process, a candidate's, a department's, or the entire company's personal information risks being compromised.
Ethical and Governance Implications
AI’s role in these board appointments raises profound ethical questions. The impersonal nature of AI-driven decisions can lead to ethical dilemmas, such as reducing the human element in a process that significantly impacts the company’s direction and culture. Moreover, reliance on AI might be perceived as an abdication of responsibility by the Executive Leadership Team, which is particularly concerning in the context of governance, where ethical leadership and accountability are critical.
Companies must carefully consider the ethical implications of using AI in this context and ensure that their approach aligns with their values and ethical standards. Maintaining a human-centric process that respects the dignity and individuality of candidates is essential and should not be relegated to AI alone.
The integration of AI into the hiring process for Advisory and Governance Boards in private companies presents both opportunities for enhanced efficiency and significant dangers. A hybrid approach that combines the strengths of AI with the discernment and judgment of human evaluators can help mitigate these risks. By doing so, companies can ensure a fair, transparent, and ethical selection process for board appointments, ultimately fostering more effective and diverse leadership.
ABOUT KEN SCHMITT
Ken Schmitt founded TurningPoint Executive Search in 2007 and has been recruiting for over 28 years. Throughout his career with global firms such as Heidrick & Struggles and large regional firms such as Accountants Inc., Ken has placed over 1,000 professionals across industries and functions in the mid-market, enterprise, and SMB space. In 2023 he wrote and published "The Practical Optimist," and since January 2023 he has served as the host of the podcast "Hiring Matters."
Disclaimer: The views and opinions expressed in this blog are solely those of the authors providing them and do not necessarily reflect the views or positions of the Private Directors Association, its members, affiliates, or employees.