As artificial intelligence capabilities continue to advance, so do the conversations about its role in learning and development. That’s why the Human Capital Lab recently convened a second Talent Development Community of Practice session focused on AI’s evolving impact on our field, this time bringing a global lens to the discussion.
Led by Valerie Williams-Foy, head of leadership development at Imperial College London, the session sparked an honest, wide-ranging dialogue among L&D professionals from across the globe. The focus? How to navigate the promise and pitfalls of AI while keeping people at the center.
Key takeaways:
- “Humans with AI will replace humans without AI.” This quote from Harvard Business School kicked off the session—and it set the tone. Participants explored how AI can supercharge creativity, reduce manual effort and expand what’s possible—but only when used intentionally and ethically.
- AI excels at structure but struggles with soul. Generative AI can synthesize survey data, streamline follow-ups and crank out competency-based interview questions in minutes. But as several participants noted, it still lacks discernment and the human spark—what one called the “soul” of good communication.
- The ethical questions are only getting louder. From bias in hiring tools to blurred lines around authorship, ethical tensions are surfacing fast. Organizations are racing to implement guardrails, with some relying solely on secure enterprise tools such as Microsoft Copilot and restricting AI use to vetted, internal data.
- Critical thinking must stay in the loop. A recurring concern: will over-reliance on AI diminish analytical skills, particularly for early-career professionals? As one participant put it, “We have to make sure AI augments human potential—it can’t overwhelm it.”
- The risk of defaulting to convenience. In a fast-paced world, shortcuts are tempting. Several attendees noted that AI’s “easy button” could lead to a decline in originality, learning and discernment if left unchecked.
- Real-world examples showed real promise. One state agency is using AI to build 24/7 study bots for engineering certification exams, turning internal content into personalized, always-on tutors. Another organization has implemented responsible AI-use policies to guide employee behavior and reduce risks.
- A call for human-centered performance evaluation. With AI influencing more deliverables, participants discussed how to evaluate performance and potential in ways that recognize both human contributions and responsible tech use—especially in early-career and support-level roles.
Bottom line: AI is no longer theoretical in talent development—it’s operational. But it’s still our responsibility to guide it with judgment, ethics and humanity. As one speaker noted, “AI should enhance what we do, not excuse us from doing it.”