What Surprised Me in My AI Journey and What I’m Doing Differently

If Parts 1 and 2 were about starting and doing, this one is about recalibrating.

Because here’s the truth: the longer I worked with AI, the more some of my early advice and assumptions fell apart.

That’s not a failure. That’s learning in real time.

“Just Get Started” Was Terrible Advice

I was one of the people who said it.

Don’t know where to start with AI? Just get started.

In theory, it sounds empowering. In practice, it ignores how overwhelmed people already are and how fast the ground keeps shifting.

What I’d say now:

Don’t know where to start? Make the starting line smaller.

  • Pick one tiny, manual thing you do personally and experiment with ways to automate or simplify it.

  • Or ask different large language models like ChatGPT or Gemini to plan a three-day trip to a place you know well.

When you already know the answer, you can judge the output. You learn what’s helpful, what’s wrong, and where human judgment still matters.

The Hype Is Both Overblown and Earned

This one surprised me.

There were moments where I genuinely stopped and thought, Wait… I just built that?

And then there were moments where I couldn’t get a tool to do something painfully basic.

Both things can be true.

AI right now is uneven. Capable in ways that feel magical one minute. Shockingly limited the next.

The mistake is expecting consistency or assuming every demo translates cleanly into real work. Don’t expect the tool to do the work. Designing the work, deciding where humans add the most value, directing the tool, and evaluating the results: that’s the real work.

The opportunity is knowing when it’s extraordinary and when it’s not ready yet.

Mastery Is a Myth (and That’s Okay)

At some point, I stopped aiming for mastery.

Not because I gave up, but because it became clear that mastery isn’t the goal anymore.

The technology is changing too fast. The interfaces keep evolving. The capabilities leapfrog each other.

I don’t think I’ll ever “finish learning” these tools.

And I don’t think anyone else will either.

What matters more is:

  • Staying curious

  • Staying adaptable

  • Staying honest about what you don’t know yet

That’s a more durable skill set than expertise frozen in time.

My Best Tool Isn’t a Tool. It’s People

This was the biggest surprise, and maybe the most obvious in hindsight.

The most powerful “AI system” I worked with wasn’t software.

It was:

  • The people who understand the work deeply

  • The ones who stay curious instead of defensive

  • The ones with the grit to keep iterating after something fails

  • The ones who love taking the messy and making it clear and simple

And just as important:

  • The people who are anxious

  • The people who are skeptical

  • The people quietly wondering what this means for them

The real work isn’t prompt engineering.

It’s change management. I’m like the 76% of executives in the HBR research (Leaders Assume Employees Are Excited About AI. They’re Wrong) who are enthusiastic about AI, and for a while I was missing that I was often talking to employees who aren’t in the 31% who share that enthusiasm.

Listening. Translating. Creating safety to experiment without fear of replacement or exposure.

Ignore that, and no tool will save you.

Where I’ve Landed (Again)

I started this year thinking AI was mostly a technology problem.

I end it convinced it’s mostly a leadership opportunity and problem.

The tools matter. The guardrails matter. But the humans, and how we design work, support learning, and talk honestly about change, matter more.

I’m still experimenting. Still changing my mind. Still very aware that this isn’t settled.

And that’s okay.

Because the goal was never to arrive. It was to learn how to move forward thoughtfully, ethically, and together.
