AI Adoption: More Than a Checkbox

Lately, I’ve been thinking a lot about what it really means to adopt AI at the team level.

It reminds me of something I’ve lived through before: the early days of Agile transformation.

I remember when we first “went Agile.” We swapped out Gantt charts for sticky notes, added standups to the calendar, renamed our planning meetings—and called it a day.

It looked Agile. But it wasn’t. It took months of unlearning, coaching, retrospectives, and iteration before we saw real behavior change—and real outcomes.

And I’m seeing the same thing with AI now.

A leadership team announces, “We’re investing in AI,” but at the team level, it’s unclear what that actually means. How does it show up in our planning? Our rituals? Our outcomes?

What Meaningful AI Adoption Actually Looks Like

From what I’ve seen, meaningful AI adoption doesn’t start with a shiny tool or a prompt engineer. It starts with visible improvements in how teams work. Are decisions being made faster? Is output improving? Are we saving meaningful time? These are your first signals.

But adoption is more than speed—it’s about behavior. Teams that truly integrate AI start working differently. They update their rituals, reframe their planning, and adjust their goals. In the most promising teams, AI isn’t just a one-off experiment—it shows up in the OKRs.

Some of the most powerful indicators, though, are less obvious. Curious teams experiment openly, sharing both wins and stumbles. You start to see a shift away from repetitive tasks and toward more creative, strategic work. And if a tool introduced three months ago is still in use, still improving—that’s a real sign of traction.

Drawing Parallels from Agile

Think back to why so many tech teams embraced Agile. It wasn’t just to go faster—it was about being smarter with our time. Agile promised more than speed; it promised learning, responsiveness, and reducing waste by shipping the right things sooner.

At first, Agile felt clunky. Standups were awkward. Sprint planning forced uncomfortable trade-offs. Retrospectives weren’t second nature yet. But with time, those rituals became the scaffolding for better decisions and stronger alignment.

That same trajectory exists with AI. It’s not about installing tools—it’s about enabling teams to act faster, learn faster, and adapt faster. It’s the same goal, just with new capabilities.

Why Speed of Value Matters in AI Adoption

Here’s the heart of it: AI isn’t the goal. The goal is value—and delivering it faster.

Done right, AI compresses the timeline between intention and impact. Teams move from question to insight in minutes. Ideas become prototypes in hours, not weeks. And a user’s pain point becomes a fix with measurable benefit—not just a ticket in a backlog.

Internally, this shows up as faster market research, quicker roadmap planning, and more efficient user story writing. Externally, it shows up in improved user onboarding, faster support resolution, and features that truly resonate—because they were informed by insights, not assumptions.

Just like Agile, AI helps us do more than move fast. It helps us move with purpose.

How to Start: A Tiered Approach to AI Adoption

If you're a PM or team lead wondering how to move the needle without overhauling everything, here’s one way to think about it—broken down by level of effort and time horizon.

Short-Term (Next 30 Days)

Think of this as your “sticky note” phase. You’re not transforming the company—you’re creating tiny openings where curiosity and experimentation can grow.

  • Start a #daily-ai-wins Slack channel
    Encourage team members to share AI-powered wins—big or small. One team I know surfaced a Notion automation that saved hours of manual note syncing.

  • Host an AI curiosity hour
    Friday lunch-and-learns where team members demo their favorite AI tools, like how someone used Claude to write a customer summary.

  • Create a shared prompt library
    Think of it like reusable code snippets—but for writing product specs, support responses, or even changelogs. (See the sketch just below.)
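
To make the prompt library concrete, here’s a minimal Python sketch. The template names, placeholder fields, and render helper are illustrative assumptions, not a prescribed format; a shared doc or Notion page works just as well.

```python
# prompt_library.py - a minimal shared prompt library sketch.
# Template names and placeholder fields are illustrative; a wiki page
# or shared doc works just as well as code.

PROMPTS = {
    "product_spec": (
        "You are a product manager. Draft a one-page spec for {feature}. "
        "Cover: problem statement, target users, success metrics, open questions."
    ),
    "support_response": (
        "Write a friendly, concise reply to this customer message: {message} "
        "Acknowledge the issue, state next steps, and avoid jargon."
    ),
    "changelog_entry": (
        "Summarize this change for a public changelog in two sentences, "
        "user benefit first: {change_description}"
    ),
}

def render(name: str, **fields: str) -> str:
    """Fill a named template; raises KeyError on a typo'd name or missing field."""
    return PROMPTS[name].format(**fields)

if __name__ == "__main__":
    print(render("changelog_entry", change_description="Added CSV export to reports"))
```

The value is less the code than the habit: one named, shared place where good prompts accumulate instead of living in individual chat histories.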

Mid-Term (Next Quarter)

This is when your AI adoption starts looking more like real change—not unlike when your team finally dropped the 2-week waterfall disguised as a sprint and actually embraced iterative delivery.

  • Run a pilot project
    Choose one workflow (e.g., user research synthesis or customer success follow-ups) and introduce AI tooling. Measure outcomes and team sentiment.

  • Tie AI to team OKRs
    Example: “Reduce weekly reporting time by 30% using AI-generated drafts.” One team used GPT-4 to auto-draft QBR templates—cutting prep time in half. (A sketch of this kind of drafting follows this list.)

  • Explore build vs. buy
    Use a lightweight impact-effort matrix to decide where custom models or fine-tuning make sense versus off-the-shelf integrations, with a simple RACI to make ownership of the decision clear.
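
Here’s a rough sketch of what the QBR auto-drafting above could look like in practice. It assumes the OpenAI Python SDK and an API key in the environment; the model name, prompt wording, and metrics are illustrative, and the output is a first draft for a human to edit, not a finished report.

```python
# draft_qbr.py - sketch of auto-drafting a QBR summary from raw metrics.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; model name, prompt, and metrics are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_qbr_summary(metrics: dict) -> str:
    """Turn a dict of quarterly metrics into a first-draft QBR narrative."""
    bullets = "\n".join(f"- {name}: {value}" for name, value in metrics.items())
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model your org allows
        messages=[{
            "role": "user",
            "content": (
                "Draft a concise QBR summary (three short paragraphs) from "
                f"these quarterly metrics, flagging anomalies worth discussing:\n{bullets}"
            ),
        }],
    )
    return response.choices[0].message.content

# A human still reviews and edits the draft before the meeting:
# print(draft_qbr_summary({"ARR": "$2.1M (+8% QoQ)", "Churn": "3.2% (+0.5 pt)"}))
```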

Long-Term (Next 6–12 Months)

If short-term wins spark adoption, long-term habits sustain it. In Agile, this is when velocity tracking and empowered squads finally started to feel normal. With AI, it’s when hiring, roles, and rituals start to reflect this new way of working.

  • Upskill intentionally
    Curate AI learning paths for different roles. Think: product managers learning prompt engineering, designers experimenting with genAI visuals.

  • Evolve roles and responsibilities
    Appoint an “AI champion” per squad to guide experiments and track value—without needing to be a data scientist.

  • Make AI part of hiring conversations
    Ask candidates about AI tools they’ve used, or how they’d apply LLMs to reduce customer friction. It signals where you’re headed.


What Slows Teams Down

Even well-intentioned teams face friction. Here’s what I’ve seen derail momentum:

Some teams equate using ChatGPT once with “doing AI.” Just like calling a meeting a “standup” didn’t make us Agile, surface-level engagement doesn’t equal adoption. AI must be embedded in how you work—not just tacked on.

Another trap? Treating AI like a side project. If no one owns it, it doesn’t evolve. It becomes a forgotten experiment.

Tool sprawl is another problem. When teams jump from tool to tool without anchoring use cases, AI becomes just another layer of noise.

And don’t underestimate fear. Teams worry about using the wrong prompt, exposing data, or simply “doing it wrong.” Normalize learning. Create a space where trial and error is expected.

A Note on Privacy and Security

As teams explore AI adoption, there’s one critical consideration that’s often overlooked in the excitement: data privacy and information security.

If you wouldn’t want the information to become publicly available on the internet, think twice before pasting it into an AI tool.

That may sound like an exaggeration, but it’s a helpful rule of thumb when working with sensitive content. Whether it’s customer data, proprietary roadmap details, or internal documents, once it’s shared with external tools—especially those not hosted internally—you may lose control over where it goes or how it’s used.

Here are a few lightweight practices I’ve found helpful to share with teams:

  • Use synthetic or redacted data when testing AI features (a minimal redaction sketch follows this list)

  • Clarify tool policies early: Can your company’s data be shared with ChatGPT, Claude, or other tools?

  • Ask legal/security to weigh in before integrating any tool into workflows
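
For the first practice above, here’s a minimal Python sketch of regex-based redaction. The patterns are illustrative and catch only the easy identifiers (emails, phone numbers, SSNs); real PII handling needs more than regexes, so treat this as a seatbelt rather than a guarantee.

```python
# redact.py - minimal sketch: scrub obvious identifiers before text is
# pasted into an external AI tool. Regexes only catch easy patterns;
# they are NOT a substitute for a real DLP/PII review.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each match with a labeled placeholder like [REDACTED:EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

print(redact("Reach Jane at jane.doe@acme.com or 415-555-0199."))
# -> Reach Jane at [REDACTED:EMAIL] or [REDACTED:PHONE].
# Note: the name "Jane" slips through - regexes miss most PII.
```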

Adoption doesn’t mean recklessness. Treat AI tools like any third-party SaaS product—one with potentially much deeper access to your information.

Learning in Public

I’m not claiming to have it all figured out—far from it. I’m still exploring where AI fits into my own product toolkit. I’ve seen pieces of it through past projects—from chatbots to predictive pricing to content recommendations—but now I’m focused on embedding that learning into how teams work, not just what tools they use.

If you’re a PM or team lead navigating this space too, I’d love to hear from you.

One small nudge:

Start with one of the short-term rituals this week. Try a 15-minute AI demo during your next team meeting. Then ask: Where else could this help us?

And if you’ve already started—what’s working for you? What’s not? Let’s learn in public.
