AI Ikigai – Chapter 3: The Human Operating System — People, Skills, and Culture in the Age of AI

Tech Writers at PF

In every AI conversation I’ve had over the years — from Microsoft’s engineering floors to Amazon’s AI council chambers, and now at Property Finder — the same theme quietly surfaces: “How do we get our people ready for this?”

And I get it.

For all the noise about models, GPU clusters, and generative breakthroughs, the truth is more grounded: AI transformation is 10% tech and 90% people.

Let’s be honest — if you’re a CXO today, you’re not just trying to catch up to AI. You’re also untangling the last two waves of transformation you were promised — digital migration, cloud modernization, low-code tooling — and trying to do it while holding onto your team’s morale, delivering quarterly results, and fending off the latest management buzzwords (remember when we were all supposed to be in the metaverse?).

So let’s strip the hype. This chapter is about what it really takes — structurally, culturally, and humanly — to make AI work inside organizations that weren’t born with it.

Why Talent — Not Tools — Is the Bottleneck

Most executives now recognize that hiring a Chief AI Officer or buying a flashy vendor solution is not a strategy. And yet, 85% of AI projects still fail to deliver meaningful outcomes. We love to blame infrastructure, data, or algorithms. But what I’ve seen, over and over, is this: teams don’t break because of technology. They break because people weren’t ready, or weren’t aligned.

At LinkedIn, where I watched the teams building the content and skills recommendation layer, our models weren’t the problem. Our bottlenecks came from knowledge silos, misaligned incentives, and cultural inertia that kept teams clinging to familiar playbooks. At Visible (Verizon’s digital-only telco), our support systems could handle 70% of tickets — but the real magic happened only when we retrained internal teams to trust those tools and redesigned incentives to reward usage, not bypassing. Only when we made it #HumanPoweredAI did it start showing its true impact. That shift let us restructure our entire CX team around where we needed the most “human touch” with our BlueGlove program, eventually bringing our costs down to 73% of what they were.

You can buy the world’s best LLM tomorrow. But if your teams don’t know how to wield it, you’re just lighting a fire no one knows how to tend.

Closing the Talent Gap — Build or Buy? (It’s Both)

Let’s start with talent. It’s tempting to believe we can solve this with a few high-profile hires. Hire three data scientists, a machine learning lead, maybe a Stanford intern or two, and you’re good to go. Right?

Not even close.

Top-tier AI talent is still scarce. And expensive. According to McKinsey, over 40% of companies expect to reskill at least 20% of their workforce due to AI. That’s not a fluke — it’s a reflection of just how much legacy knowledge must evolve to unlock modern AI systems.

At AT&T, they saw the writing on the wall early. Their $1B “Future Ready” initiative wasn’t about moonshots — it was a bet on people. They upskilled tens of thousands of employees through partnerships with online learning platforms and universities. The result? A workforce that could grow into their AI strategy, not one they had to constantly replace.

At Property Finder, we’ve taken the same long view. Through Project Cinco de Mayo, we’re teaching engineers how to build with AI — not just use it. Prompt engineering, model review, bias understanding, and explainability aren’t left to one “AI squad” — they’re shared skills. We don’t want bystanders. We want builders who can audit, shape, and trust what they deploy.

That said, strategic hiring still matters. For foundational capabilities — like building out our retrieval-augmented generation (RAG) pipelines or creating our own agent-listing matching framework — we’ve brought in leaders from the Valley, from Toronto, and, yes, local talent as well. But always with a focus on knowledge transfer, not permanent dependency.
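For readers who haven’t built one, here’s a minimal sketch of what a RAG pipeline does. Everything in it is a hypothetical stand-in, not Property Finder’s actual stack: a production pipeline would use a real embedding model, a vector database, and an LLM call rather than these toy functions.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# All components are toy stand-ins for illustration only.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term count.
    # Real systems use dense vectors from an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query; keep the top k.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    # Ground the model's answer in retrieved context rather than
    # whatever it memorized during training.
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

listings = [
    "Two-bedroom apartment in Dubai Marina with sea view",
    "Studio near the metro in Business Bay",
    "Family villa with garden in Arabian Ranches",
]
print(build_prompt("apartment with a sea view", listings))
# The resulting prompt would then be sent to an LLM of your choice.
```

The pattern matters more than the parts: retrieval grounds the model in your own data, which is exactly the kind of foundational capability worth hiring for once, then transferring broadly.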

Org Models That Actually Work

Now let’s talk structure. One of the most common questions I get is: Should we centralize AI or embed it in business units?

The honest answer? It depends — but start centralized, then federate with discipline.

Early in an AI journey, centralization is key. You need a critical mass of expertise, shared tooling, standards, and a clear governance model. Otherwise, you end up with 12 different AI projects across departments, all using slightly different data, built in slightly different ways, with no shared learning or controls.

But over time, the magic happens when AI becomes embedded in the business. At Amazon, we called it “AI-as-infra” — it wasn’t a team you talked to, it was a capability you leaned on. That’s the goal.

The most effective pattern I’ve seen is the “hub-and-spoke” model:

  • A central AI/ML platform team maintains infrastructure, guardrails, and shared assets.
  • Business units embed AI translators — people who deeply understand domain problems and know how to work with data scientists.
  • Cross-functional squads (data science, product, engineering, UX) own end-to-end delivery of AI features.

At Property Finder, this structure allowed us to move from experimentation to production-grade AI — like our life-stage-aware search flows, or our contextual agent match scoring — without reinventing the wheel each time.

One pitfall to avoid: appointing a single team to lead this effort. While it is important to centralize the facilitation, avoid creating superheroes. #SuperHeroesDontScale.

Why “AI Culture” Isn’t Optional

If talent is the engine and structure is the chassis, culture is the fuel.

And here’s the hard truth: you can’t “roll out” AI culture with a newsletter or a few lunch-and-learns. You have to earn it.

That means:

  • Transparency: If AI is recommending actions to users or staff, show why. “Because the model said so” won’t fly in high-stakes domains like real estate or finance. We use explainability layers — think “top 3 reasons this agent is a fit” — not just model outputs (a minimal sketch of this idea follows this list).
  • Psychological safety: Your teams need to know they can experiment with AI — and fail — without punishment. At WeWork, when we tried automating occupancy predictions, early models were way off. But because we treated it like a learning loop, not a failure, we eventually built a much stronger system. Celebrate learnings, not just launches.
  • Executive modeling: If your senior leaders still make decisions by instinct and Excel while preaching AI-first, it won’t stick. At Property Finder, I show up to ELT meetings with AI dashboards — not because they’re perfect, but because behavior sets tone.
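To make the “top 3 reasons” idea concrete, here is a minimal sketch assuming a simple weighted scoring model. The feature names and weights are illustrative assumptions, not our production logic; the point is that whenever a score is a sum of contributions, you can rank those contributions and surface the largest ones as human-readable reasons.

```python
# Minimal "top reasons" explainability sketch for an agent-match score.
# Features and weights are illustrative assumptions, not a production model.

WEIGHTS = {
    "area_expertise": 0.40,    # agent closes deals in this neighbourhood
    "response_speed": 0.25,    # how quickly the agent replies
    "price_band_match": 0.20,  # track record in this price range
    "language_match": 0.15,    # shares a language with the buyer
}

def explain_match(features: dict[str, float], top_n: int = 3):
    # Each contribution is weight * feature value; ranking them gives
    # a readable explanation of why the score is what it is.
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    reasons = sorted(contributions, key=contributions.get, reverse=True)[:top_n]
    return score, reasons

score, reasons = explain_match(
    {"area_expertise": 0.9, "response_speed": 0.6,
     "price_band_match": 0.8, "language_match": 1.0}
)
print(f"Match score: {score:.2f}")
print("Top reasons:", reasons)  # e.g. ['area_expertise', 'price_band_match', ...]
```

Even a simple decomposition like this beats a bare score: it gives users something to agree or disagree with, which is where trust starts.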

Overcoming the Real Resistance

The most common resistance to AI isn’t about fear of being replaced.

It’s about fear of being redefined.

Employees wonder:

  • “Will I still be valuable if this model does 60% of what I used to do?”
  • “Will I be judged for not using AI?”
  • “Will this tool make decisions I don’t understand — but I’m accountable for?”

These are human concerns. And we treat them as such.

At PF, we start every AI rollout with two principles:

  1. AI should augment, not displace. If a CX agent now spends 10 minutes solving something AI could solve in 2 — our goal is not to replace them. It’s to let them focus on the tougher issues.
  2. People need to see the proof themselves. We don’t mandate adoption. We show how AI saved another team time, improved accuracy, or reduced complaints. Nothing builds trust like results.

The Path Forward — A Culture That Learns, Together

If there’s one thing I’d urge every CXO to do right now, it’s this:

Make learning part of your AI plan.

This doesn’t mean sending everyone to take a Coursera course on machine learning. It means creating the conditions for people to grow with the change.

  • Train line managers to ask, “How can AI help us here?”
  • Create internal office hours for AI questions.
  • Recognize experimentation — even if it fails.

At Property Finder, we’re working on our “Internal AI-powered Retooling Roadmap” — not to replace people, but to replace the repetitive tasks they do.

Key Takeaways

  • You don’t scale AI without scaling people. Hire what you must, but invest in upskilling broadly.
  • Structure matters. Centralize early, federate with care, and embed translators to bridge tech and business.
  • Culture is the multiplier. Trust, safety, and transparency are prerequisites for real adoption.
  • Change is emotional. Address fears head-on, and let results speak louder than mandates.
  • Your org is your AI model. If it’s brittle, biased, or opaque — your AI will be too.