If it feels like everyone is panicking about AI right now, you’re not imagining it.
A lot of the loudest takes are designed to travel. They’re crafted to provoke urgency and fear, and that noise can make even experienced leaders feel overwhelmed.
My take? Don’t build your strategy on someone else’s hyperbole. Instead, zoom out and get intentional about how you’re using AI at work.
This week’s theme connects three ideas:
- Slow down. AI speeds execution, but judgment, alignment and decision-making still take time.
- The job is changing. The skill stack is shifting toward intent-setting, workflow design and governance.
- Beware of AI agent risks. AI agents are impressive, but autonomy without guardrails creates exposure.
Want to lower your stress levels?
Then read on.
When Everything Speeds Up, Slow Down
Last week, investor Matt Shumer lit a match on social media with an essay called "Something Big Is Happening" that has now crossed 80 million views.
He wrote that he was “shocked” to discover AI could perform all the technical work of his job. He argued that lawyers, marketers and accountants are about to feel the same shock.
I read it, and I understood why it spread so fast. It taps into a real fear: if AI can do the work, what exactly am I here for?
But Shumer’s position isn’t the only take on the future of AI. Ann Handley published a quick response, and she pushed back hard.
She argued that Shumer’s focus on speed and efficiency misses the point: the real danger is letting someone else’s clock and someone else’s definition of winning dictate how you work and live.
AI can outperform doctors, lawyers and PhD students on standardized tests. It can write decent copy, analyze campaign data, draft strategies and build slide decks in seconds. But raw intelligence is not the same as real-world judgment.
As Handley highlights, a senior marketer is not valuable because they can write copy faster. They are valuable because they know which message will land with which audience. They can sense when a campaign feels off before the data shows it.
AI is useful for many things, but it can’t sit in a room when a CEO wants to pivot strategy three weeks before launch (we’ve all been there!).
This is where I think a lot of us are getting distracted. We’re racing to be faster, automating more, rushing to prove we can do everything in half the time.
When the pace accelerates, our instinct is to run, and run fast.
As Handley suggests, sometimes the smarter move is to slow down long enough to ask better questions:
- What am I trying to build?
- What is worth protecting because it compounds over time?
- Where does friction create value instead of destroy it?
- What part of my work has value even if AI could do it faster?
If you define your value by output volume, AI will beat you. But if you define your value by judgment, taste, leadership and decision-making under pressure, you have leverage.
As a marketer and operator, you need to:
- Use AI to handle the repeatable technical work.
- Double down on judgment and buyer understanding.
- Design workflows where you are the architect and decision-maker (not the typist).
- Protect the parts of your craft that compound: trust, brand, relationships, credibility.
Panic is not a strategy. Panic makes you reactive and pushes you to chase speed over value.
The people who win today will be the ones who can hold two ideas at once: move fast with the platforms, and slow down when it comes to judgment.
The Hidden Cost of Autonomous AI
A few weeks ago, OpenClaw burst onto the scene, and within days people were buying Mac Minis and letting it run autonomously on their systems.
YouTubers and X influencers were extolling the wonders of this new system. It can read local files, interpret documents, update spreadsheets and generate outputs directly inside your existing folders.
On paper this sounds incredible.
Hours saved! Entire projects completed while you sleep! Complex problems solved!
In practice, OpenClaw is a security nightmare waiting to happen.
“To do useful things like reserving your hotel room, getting your pizza delivered, or cleaning up your e-mail box, it needs your name, password, credit-card number — and all the other things any crook also wants,” says Computerworld’s Steven Vaughan-Nichols.
If you're leading a team, you have good reason to be cautious about tools and platforms like OpenClaw.
Here are four key risk areas you need to keep in mind:
1. Access Is Risk
Any AI system that can read internal documents, trigger workflows or access CRM data is effectively an employee with admin privileges.
Define what it can access and what it cannot, and always default to the most restrictive permissions it needs to operate.
2. Make Data Boundaries Explicit
Create a simple policy for team members that outlines:
- What data can be entered into public tools
- When to use approved platforms
- When human review and approval are required
I regularly speak with leaders who assume their teams will navigate this responsibly without formal guidance.
That approach introduces unnecessary risk.
Without clear guardrails, it’s only a matter of time before someone makes a decision that exposes sensitive data or creates a preventable security incident.
3. Build in Human Review
Autonomous agents are impressive, but they’re also unpredictable. Don’t let these agents send external communications without approval, publish content without oversight or trigger financial or legal actions independently.
Automation should assist execution, not replace accountability.
4. Scrutinize Vendors
Before approving any new AI tool, ask:
- Where is the data stored?
- How is it trained?
- What retention policies exist?
- What contractual protections are in place?
Tools like OpenClaw generate excitement and showcase what's technically possible, but introducing them into your team’s workflows without proper review creates real exposure.
If you are leading a team, your responsibility is to define clear guidelines and permissions, articulate the risks and hold people accountable for how these tools are used.
Leading Responsible AI Adoption
If OpenClaw is an example of what happens when excitement outruns governance, Klaviyo offers a useful counterpoint. Instead of chasing headlines, Klaviyo focused on helping 1,800 employees find one meaningful AI use case each.
Teams were encouraged to experiment within defined boundaries, share learnings and surface their wins. Instead of panic, the result was steady adoption and strong company morale.
If you’re leading AI adoption with your team or company, you can borrow these tactics from Klaviyo’s approach:
- Maintain clear leadership support without reckless autonomy
- Focus on experimentation instead of blanket tool proliferation
- Encourage internal knowledge sharing so wins can compound
- Define use cases tied to business outcomes
Discipline scales better than excitement, and certainly better than handing out AI licenses without a plan.
You can read the full case study on the Section blog here.
The New Skill Stack for Human + AI Teams
If AI (like OpenClaw) can draft, analyze, summarize and scale, then the question becomes:
What’s left for us?
Plenty.
As much as we might want it to, in a Human + AI model, the work doesn’t disappear. Ask any overwhelmed marketing or ops lead who still feels behind.
The key to building a high performing team is understanding how to design the system and divide the labor. In this model, AI handles synthesis, testing and scale, while humans set intent, define standards and make decisions.
If you want to lead (or operate) through this change, you need a structure for how humans and AI work together.
Here are five areas that determine whether AI becomes leverage or liability:
1. Setting Intent
Before AI touches anything, someone must define:
- Business objective
- Audience boundaries
- Constraints
- Success criteria
Vague intent produces generic output and creates tedious rework for the team. Clear intent drives higher quality outcomes and streamlines your workflows.
2. System Design
In the old model, human team members do all the work—they are the sole drivers of strategy, creation, and execution. The tools and platforms are purely supportive, acting as utilities that help them complete specific tasks, such as scheduling a post or running a basic report.
In a hybrid model, humans act more like creative directors and architects. They design the workflow and delegate the appropriate processes and tasks to AI.
That means:
- Deciding where AI drafts and where humans review
- Integrating feedback loops
- Setting approval gates
3. Judgment Inside Ambiguity
AI performs well inside defined parameters, but it struggles when rules shift.
This is where humans shine:
- Navigating executive pivots
- Reading cultural context
- Interpreting incomplete data
- Managing trade-offs
4. Narrative and Alignment
AI can produce content, but it’s not in the room with your head of product or CEO. AI isn’t the one driving alignment across stakeholders.
Hybrid teams require people who can:
- Translate insight into decisions
- Frame trade-offs clearly
- Bring sales, product and leadership onto the same page
5. Governance and Standards
As AI scales output, so does risk.
Someone must:
- Define quality standards
- Set guardrails
- Protect brand and trust
- Own accountability
The more automation you introduce, the more oversight and direction your AI needs.
Building an AI-Ready Marketing Engine
If this week’s newsletter resonated, you’ll want to join my upcoming workshop:
AI for Marketers next Thursday (Feb 25)
It’s a practical workshop focused on how AI supports strategic marketing initiatives, content creation and campaign execution — without compromising judgment, brand or governance.
We’ll cover:
- Where AI meaningfully strengthens marketing strategy
- How to use AI in content workflows without diluting voice
- Smart automation for campaigns that preserves oversight
- How to design workflows where humans lead and AI accelerates
This workshop will help you integrate AI deliberately and responsibly.
You can learn more and register here.
Want to Level Up Your AI Game?
If your team is ready for a hands-on AI strategy session, my custom-designed workshops are built to uncover the workflows that can save you hours every week.
Prefer to start small? My YouTube channel is packed with quick, practical “how-to” videos that show you exactly how I use AI tools for marketing, content, and automation.
Planning an event or conference? I deliver high-energy AI sessions that engage audiences and leave them with actionable strategies they’ll talk about long after the event. Book me for your event here.