My AI Beliefs and Guardrails… for now
One year in, still learning
I didn’t start the year 2025 fluent in AI.
I started it curious and feeling more than a little behind. (If you're still feeling behind: McKinsey's State of AI in 2025 found that while 88% of respondents are using AI in at least one function, most organizations are still in pilot mode enterprise-wide.)
I had very little in the way of real skills or hands-on experience. I wasn’t experimenting. I was mostly reading. Watching what others were building. Sharing articles. Nodding along.
So I started small.
ChatGPT. One Coursera course. A few awkward prompts. A lot of “well… that didn’t work.”
Then more Coursera courses. I became a huge fan of Jules White's courses and now follow his work at CR8.io.
Then building custom GPTs.
Then building an app with Replit.
Then trying other models like Claude, Perplexity, and Gemini. I was surprised, and then not, to learn that only 10% of ChatGPT users were checking out other models, according to a16z's State of Consumer AI in 2025. I definitely went all in on one at first; now I look at outputs across models.
Then using Zapier to connect actions between Notion and my personal calendar.
And then I realized I was obsessed. Not with the technology itself, but with what happened when I actually used it. When I broke workflows. When I tried to automate something mundane. When I realized how quickly confidence grows once fear gives way to curiosity.
From there, the year turned into a cycle:
Learning. Experimenting. Failing. Learning again. Occasional wins. Long pauses to think. Listening to people who were further ahead.
Listening to people who thought they were behind.
Becoming a genuine fan of others’ work.
Joining communities like Women Defining AI and Human Cloud to learn from others.
Being honest, sometimes publicly, about what AI absolutely cannot do.
I haven’t “landed.” And I don’t think anyone has.
The technology keeps changing. The tools evolve. The hype resets every few weeks. With every new possibility comes a new set of risks, blind spots, and temptations to oversimplify something that is anything but simple.
What I have done is ground myself.
Not in tools. Not in predictions. But in a set of beliefs and guardrails that help me decide how to use AI and when not to.
What I Believe
AI is a multiplier, not a replacement. It doesn't create judgment, taste, ethics, or courage. It amplifies what already exists. If a process is thoughtful, AI can enhance it. If it's broken, AI just makes the mess bigger, faster.
The best AI work starts closest to the work. Not in strategy decks. Not in executive brainstorms. It starts with the people doing the job every day, when you slow down enough to listen and let them experiment without fear. Watching their curiosity and excitement grow fuels mine too.
Fluency matters more than access. Buying tools is easy. Building confidence is hard. Real impact shows up when people understand when to use AI, when not to, and how to question what it produces.
Automation is a two-sided responsibility. When we automate, we can’t just optimize for the person doing the work. We also have to design for the person on the other end of the process.
AI and automation should make it better to do the job and better to experience the company. If efficiency improves internal metrics but degrades trust, clarity, dignity, or service, we didn’t improve the system. We shifted the burden.
My Guardrails
AI supports decisions. Humans make them. Decisions require context, accountability, and humanity. Full stop.
Employee experience is a design constraint, not a tradeoff. If an automation makes work faster or less expensive but leaves employees confused, monitored, or disengaged, it failed. AI should reduce friction in how people work and how they experience being an employee.
Transparency beats secrecy. When people hide their AI use, that’s not a policy problem. It’s a trust problem. I aim for openness, shared learning, and fewer whispered workflows.
No black boxes in high-stakes work. If I can't explain how an output was created, or more importantly challenge it, I won't use it where trust, safety, or livelihoods are involved. Actually, I won't use it at all.
Automation should remove friction, not meaning. I'm interested in automating the tedious, repetitive, soul-sucking parts of work so people can spend more time on judgment, creativity, and connection.
Experimentation needs guardrails, not fear. Bans drive behavior underground. Clear boundaries create learning. I’ll choose the latter every time.
Where I’ve Landed (For Now)
AI isn’t a strategy. It isn’t leadership. And it definitely isn’t a shortcut around understanding how work actually gets done.
Used well, it gives time back. It improves clarity. It makes both sides of a process better.
Used poorly, it just adds speed to dysfunction.
I’m still learning. Still experimenting. Still changing my mind as the technology evolves. That’s the work.
My north star remains simple:
Human judgment stays in the driver's seat. Employee experience stays non-negotiable. AI is a tool, like so many others we use.
And honestly? That feels like solid footing. For now.
My best advice: Stay curious.