The AI Productivity Boom Feels Risky — But It Might Be Your Invitation to Experiment
From Pink Slip Anxiety to Reclaiming Your Experimentation Edge
The Connecting Point Essay | Words: 1,482 | Reading time: ~7 minutes
There’s a strange feeling rippling through offices right now. Call it unease. Or maybe a quiet dread.
It’s not that people aren’t excited about AI — they are. It’s powerful, fascinating, sometimes even fun. But beneath the enthusiasm, a suspicion is building:
Is this innovation or just a sleeker version of downsizing?
One example recently caught my eye. Cisco — a company I’ve worked with as a consultant over the years — is experimenting with a 50/50 reinvestment model: teams that generate savings through AI can keep half and reinvest it in training, hiring, or better tools. In an interview with Charter, Francine Katsoudas, Cisco’s Chief People, Policy & Purpose Officer, described it as a way to spark grassroots innovation.
It’s a bold idea. A hopeful one.
But the more I thought about it, the more I wondered: What’s really happening behind all the headlines about AI and productivity?
The Promise: Empowerment and Upskilling
Let’s start with the optimistic version. Many companies are doing genuinely interesting things with AI.
Some are considering ways for teams to reinvest savings from automation. Pinterest hosts internal hackathons where employees build AI tools to solve real workplace problems — including one that now handles 4,000 employee queries a month. Amazon has poured over a billion dollars into upskilling its workforce. Salesforce is offering free AI courses and certifications to anyone willing to learn.
Across the board, companies are experimenting, skilling up, and saying all the right things about preparing their people for the future.
This is the narrative we want to believe in. That AI isn’t about replacement. It’s about reinvention. That we can do less drudgery, more meaningful work. That technology, properly implemented, can humanize — not hollow out — the workplace.
And yet.
The Discomfort: Innovation Meets Insecurity
When you zoom out, the pattern gets more uncomfortable. Many of the same companies investing in AI are also quietly reducing their workforces.
Klarna embraced AI across customer service, even debuting an AI-generated avatar of its CEO to announce earnings. It has also cut 40% of its workforce since 2022.
JPMorgan Chase developed over 100 AI tools, reducing servicing costs by 30% and projecting a 10% cut in operational headcount. Then it laid off 1,000 employees in early 2025.
Amazon? They’re integrating AI deeply, and at the same time, conducting layoffs across AWS, advertising, and Prime Video. Engineers report having to do more with fewer teammates and higher expectations.
Even companies like Salesforce, PwC, and Pinterest, each touting AI-driven transformation, have executed significant job cuts in parallel.
The story becomes clear: AI isn’t replacing people yet. But it is being used to justify leaner teams. And for the average worker, that sounds a lot like, "Here’s a new tool to make your job easier. Also, your team just got cut in half. Good luck."
Reinvestment Models: Empowerment or Elimination?
Give teams ownership. Let them share in the upside. Encourage experimentation. Incentivize initiative. The idea behind reinvestment models is simple: if teams can drive savings through AI, they should benefit from those gains. It's a strategy designed to foster a culture of proactive adaptation.
But in the AI productivity story, what goes unsaid often speaks the loudest.
AI doesn’t just optimize performance — it becomes a scoreboard. One that quietly tracks who adapts, who saves, who reinvests. Over time, that data starts to reshape internal narratives: about value, about viability, about who’s moving forward and who’s falling behind.
It also subtly shifts hiring logic. Managers are nudged to ask: Do we even need to refill this role, or can automation cover it? If a team isn’t producing measurable efficiency gains, are they less aligned with the company’s future?
None of this is inherently malicious. It’s strategic. But let’s not kid ourselves. It’s a soft sorting mechanism. A cultural signal.
A carrot with a shadow.
The Real Cost: Trust
The paradox is this: AI is being framed as a way to make work better — more creative, more efficient, more human. But for many employees, it’s landing as the opposite: more pressure, more uncertainty, more risk.
The message is mixed: Upskill! But the layoffs keep coming. Innovate! But your team just got smaller. Own your future! But only if your metrics justify it.
That kind of contradiction doesn’t inspire transformation — it breeds skepticism. And skepticism is innovation’s worst enemy, because it feeds fear.
When people don’t feel safe, they can’t be bold. They don’t explore. They conserve energy. They try to survive.
Employees don’t just need direction. They need clarity: about which tools to learn, yes, but also about what the future of their role, their relevance, and their value actually looks like.
Without that, what you get isn’t adoption. It’s disengagement:
People go through the motions.
They tick the training boxes.
But they’re not excited, they’re guarded.
They aren’t resisting AI. They’re just unsure if they’re safe enough to take risks.
To experiment. To open themselves up to potential setbacks and failed attempts.
And beneath the surface, many aren’t halfway out the door. They’re halfway into survival mode — afraid to leave, but just as afraid to fully lean in.
And that quiet fear, left unaddressed, will cost companies far more than any AI tool can save.
But if fear is the cost of inaction, then clarity must be our catalyst. And that clarity starts with a shift from waiting on strategy to creating it ourselves.
Find Your What & Why – Become Part of the Experiment
AI is changing the landscape of work — fast. And while some of that change feels imposed from the top down, not all of it is out of your hands. You may not control every decision that comes out of the executive suite, but you’re far from powerless. The challenge now is to move from unease to agency.
That starts with a question: What can you reinvest? And why does it matter?
Take Cisco’s 50/50 model as a signal of what’s possible: teams that generate savings through AI can keep half and channel it into smarter hiring, better tools, or upskilling. It’s a radical shift — a company saying, “We trust you to know where to invest.” That kind of grassroots innovation isn't just about saving money; it’s about reclaiming influence.
You can wait for permission. Or you can prototype your own version of the experiment.
Strategic: What process is draining your team’s time right now? Where could automation give you back hours — and how could you reinvest that time in a way that moves the needle?
Surgical: You don’t need a department-wide overhaul. Start with a sliver. One workflow. One pilot. One small win.
Scrappy: Use what you have. The best experiments aren’t polished — they’re proven. Start ugly. Improve in motion.
It’s easy to view AI as a tide that either lifts you or sweeps you away. But this is your moment to choose how you ride it. Identify the value only humans can deliver — creativity, empathy, judgment — and build around that. Anchor your “what” in a purpose that’s real to your team, and your “why” in a future you actually want to work in.
This isn't about being naive. It’s about being nimble. The future of work isn’t going to be handed down — it’s going to be hacked together by people who see opportunity in the gaps.
Find your edge. Define your experiment. Then go build the case for keeping it.
Rebuilding the Social Contract
This moment demands more than new models. It demands new honesty.
We have to ask:
What does a company owe its people during transformation?
Can we have innovation without disposability?
Are we building tools to empower or just to replace?
If we want AI to be a leap forward, not a layoff machine, we need to treat it as a human transformation, not just a technical one. That means bringing employees into the conversation. Sharing not just strategy, but intent. Recommitting to transparency and fairness, even when the numbers are pressuring you otherwise.
The companies that get this right won’t just thrive in the AI era. They’ll build cultures people want to stay in — and fight for.
The Connecting Point
If you're navigating this tension — as an employee, a leader, or both — you’re not alone. Many are asking the same quiet questions.
How is your company handling this shift? Where are they getting it right? Where are they falling short? And what would you do differently?
In my Deep Dive essay for premium subscribers later this month, we’ll explore the hidden pressure points inside today’s AI productivity push and share practical ways to shift the culture before the metrics take over.
We’ll lay the groundwork for spotting smarter opportunities, but more importantly: we’ll start by protecting the people expected to deliver them.
AI may be inevitable. But how we roll it out? That part is still up to us.