The AI skills gap is here — and bridging it is a social responsibility
Critical skills shortages have plagued every iteration of the knowledge economy. From punch cards to coding and now artificial intelligence (AI), innovation has generally moved faster than the education and training needed to support it. And in today’s job market, the result is a widening gap between industry demands and the skilled professionals needed to meet them.
The skills gap has also led to an opportunity gap — a widening gulf between the increasingly diverse range of people seeking to participate in the global economy and the opportunities they have to contribute and build meaningful careers. This is especially true when it comes to AI skills and the economic progress they enable. As AI technologies advance by leaps and bounds, people across geographies, generations and socioeconomic classes risk falling behind.
As participants in the Atlantik-Brücke New Bridge Program — a transatlantic fellowship for young professionals from Germany and the U.S. — Kyndryl Foundation Social Impact Director Monoswita Saha and Ciklum Senior Manager & AI expert Enver Cetin found common ground in their approach to managing these issues on both sides of the Atlantic. Here, they share their insights on the cultural and societal impact of AI and the urgency with which everyone must prepare.
AI is such a broad topic that many people simply tune out. What are some core concerns or opportunities organizations must focus on?
Saha: There are so many layers to this. Kyndryl recently released a People Readiness Report revealing that 71% of global business leaders say their workforces are not ready to use AI successfully, and that only a handful of companies — about 14% — are upskilling their workforces while deploying AI for commercial use. At Kyndryl, we’re focused on helping our customers modernize their IT systems to make the most of AI. Building and providing AI skills for our customers is part of that effort. As a people-centered firm, we’re also deeply committed to skilling both our own employees and the communities in which we do business. Through the Kyndryl Foundation, we invest in and partner with nonprofits focused on training and awareness of digital skills, including AI, specifically for those who are not well represented in the industry. It’s important that the education and skilling fit the needs of the learner and the sector. These approaches can incorporate career mentoring and professional skills training alongside technical training.
“We need to understand that AI is becoming a foundational tool like computing itself.”
Cetin: For the general population, a working knowledge of AI and its daily impact is as important as AI skills are for professionals whose jobs are being transformed by this technology. Awareness leads to understanding, which helps assuage the fears people may have about how AI will impact their lives. We need to understand that AI is becoming a foundational tool like computing itself. So, at the very least, people need to be conversant in AI principles and operations instead of seeing the technology as mysterious and opaque. That shift is already happening in certain segments of the workforce and among digital-native generations. But there are at least two compelling reasons to spread AI literacy as widely as possible: the AI skills gap is hobbling the progress we need in areas like healthcare and environmental sustainability, and people who lack opportunities to develop AI skills risk being excluded from a rapidly changing global economy.
Are there differences between American and European approaches to AI?
Saha: Enver and I participated in a Future of Work discussion hosted by Atlantik-Brücke in Berlin, where we talked about labor upskilling, cybersecurity, and AI. The event brought together business leaders, decision-makers and policymakers to address the implications of technology and conflict in Europe. We could just as easily have held that forum in the U.S.
That said, there are differences in American and EU approaches to regulating AI: a light-touch, pro-innovation approach in the United States versus a more heavy-handed regulatory approach in the EU focused on deploying safe and ethical AI. The former enables rapid deployment, while the latter focuses first on getting transparency and accountability right. Nevertheless, both stress the importance of AI skills. Under the EU’s AI Act, providers and deployers of AI systems will have to take measures to ensure a sufficient level of AI literacy for their staff, as well as for any other person dealing with the AI systems on their behalf. So, the EU is placing the responsibility squarely on employers to ensure the skills are there.
In the U.S., the Trump Administration’s “AI Action Plan” focuses on expanding AI education and retraining opportunities across the workforce. The plan emphasizes early exposure to AI in K–12, expanded coursework in higher education, and rapid retraining for mid-career workers through registered apprenticeships and workforce programs. It also directs agencies to study AI’s impact on the labor market and encourages employers to offer reimbursement for employees who pursue AI-related training and certifications. While Europe is pursuing a top-down regulatory framework that requires employers to deliver AI literacy, the U.S. is advancing an education model built on voluntary partnerships, apprenticeships, and targeted retraining to build an AI-ready workforce.
Cetin: Thoughtful business leaders and policymakers want to steer the AI ship in the right direction. They certainly want progress, efficiency and cost savings in the commercial space. But they also want effective AI governance, ensuring that humans remain in control. Right now, we’re in the middle of trying to find the right balance between innovation, safety and trust.
What critical elements are needed for a common transatlantic AI framework?
Saha: Whatever we call out here is much easier said than done. From my perspective, there are a few governing principles to prioritize:
First, promote human oversight, transparency (including the development, implementation and disclosure of AI governance and risk management policies), and accountability. This includes regular checks for bias. Second, implement cybersecurity standards that enhance security, resilience, and privacy. That means investing in robust security controls such as physical security measures, cybersecurity safeguards, and insider threat protections across the AI lifecycle. Third, advance skilling initiatives that enable the current and future workforce to act proactively.
“While Europe and the U.S. differ in their approaches to regulating AI, both sides see eye to eye on this issue.”
It is critical to ensure there are no unjustified digital trade barriers and that data can flow freely between the U.S. and the EU. While Europe and the U.S. differ in their approaches to regulating AI, both sides see eye to eye on this issue, as reflected in the EU-U.S. Joint Statement on reciprocal, fair, and balanced trade signed in August 2025. This is an important first step.
Cetin: A critical component of that education and skilling must address the broader cultural influence of AI. In other words, we need to incorporate cultural sensitivity — including an understanding that cultural change takes time — into any of our approaches to training. Strategy only works when there’s culture behind it. That means tailoring programs to local contexts. This includes designing AI literacy initiatives that respect different attitudes toward technology, embedding ethics discussions that resonate with specific communities, and ensuring that materials are available in the languages people use in their daily lives and at work.
What needs to happen next?
Saha: We need more education and training — not only to meet the growing demand for a talented AI workforce, but also to welcome people with non-traditional educational backgrounds. Four-year degrees are no longer the only path; targeted credentialing and technical programs should be part of an “all of the above” strategy. In the U.S., that means scaling fast, affordable “last-mile” training tied to employer demand and investing in lifelong learning so mid-career professionals can adapt. Both government programs and private sector leadership must extend opportunities beyond degree holders.
Cetin: Building on my previous response, the next step is to translate cultural sensitivity into action. We need to move from awareness to implementation — creating cross-cultural learning environments, shared ethics labs, and exchange programs that bring European and American practitioners together to co-design responsible AI use cases. That’s how we ensure the values we talk about are actually lived in practice.
Enver Cetin is an AI Leader at Ciklum, a global Experience Engineering company helping organizations reimagine how they work through AI, data, and intelligent automation.
Monoswita Saha, a former educator, is Director of Social Impact at Kyndryl, the world’s largest technology infrastructure provider.