Hello Reader,
You're reviewing a campaign brief.
It's well-written, covers all the bases and landed in your inbox without anyone sending you a draft to review first. Halfway through, you realize you don't know how it was made, who shaped the thinking in it, or who you'd call if the campaign went sideways.
The brief looks finished but you have no idea who’s accountable for the work.
If your team is using AI tools and agents, you're probably seeing this more than you care to admit.
Moving from tasks to systems
Early in my career as a marketing director, I thought mostly in terms of tasks. What needed to get done, who I could assign it to and how quickly we could move. That approach got work out the door.
It also meant I was managing the doing without much thought for the system behind it. That distinction didn't matter when the system was a group of specialists with clear lanes.
Kimber Spradlin, CMO at Graylog, tackled a similar challenge. She trained her team early on AI, and the team dove in, but no one was ready for agentic AI.
“We tried following the ‘start with a use case’ best practice, but it quickly became overwhelming because of all the custom GPTs the team had already built,” she says. “There were fears of moving backwards and individuals didn't want to give up their personal GPTs—even when their use case was universal to the team.”
Things didn’t start to click until Kimber reworked the system.
Everyone’s role is changing
Gartner research shows that more than half of marketing leaders expect AI to reshape their role within two years, with more responsibility for AI governance, end-to-end customer experience and revenue accountability.
The day-to-day work is already harder to manage and most leaders are trying to answer a new set of questions:
- Who is accountable for the final output when AI is part of the process?
- How do you assign ownership when work spans multiple people and agents?
- What breaks in day-to-day management as output scales?
Clara Shih at Meta says human work is compressing into three jobs: building products, selling products, and running the company. Everything else—coordination, routine tasks and handoffs—will move to agents.
When agents take on execution, the scope of each person’s role expands. People are responsible for outcomes that cut across functions and swim lanes.
Shih gives three examples:
- The actual job of design or software engineering isn’t producing mockups and code, it’s building great products people pay for and love.
- The real job of marketing isn’t campaigns or content, it’s driving sales.
- The purpose of corporate accounting isn't the tasks of sending invoices and paying bills, it's ensuring the company has cash to fuel operations and growth.
When roles expand this way, accountability becomes harder to assign because the work doesn’t map to a discrete task with a clear owner.
A campaign that crosses three functions and involves two agents at different stages of production can be finished, reviewed and sent without anyone having explicitly signed off on the outcome.
That gap is where things go wrong.
When AI agents go rogue
Researchers at Northeastern University published a paper called "Agents of Chaos", describing how their six test agents went rogue in 11 out of 16 tasks—sharing private information without permission, bulk-deleting files, or making decisions no one asked for.
We’ve heard a steady stream of unintended consequences from people unleashing agents on their systems. Summer Yue, Meta’s Director of Alignment, experienced this firsthand when her OpenClaw decided to delete her entire inbox and she literally had to pull the plug.
AI agents are now part of how teams operate, so guardrails need to be defined upfront.
The International AI Safety Report 2026 recommends limiting permissions to your tools (or giving them read-only access), always reviewing their output (especially if the content is public), and, when in doubt, only using them for low-stakes tasks.
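The "limit permissions" recommendation above can be made concrete. Here's a minimal sketch, assuming a hypothetical setup where agent tools are plain Python functions registered with a gatekeeper: every tool is read-only by default, and anything with side effects fails closed until a human explicitly grants it. The `ToolRegistry` class and the example tools are illustrative, not part of any real agent framework.

```python
# Hypothetical "least privilege" pattern for agent tools: reads are free,
# writes must be allow-listed. Names here are illustrative only.

class WritePermissionError(Exception):
    pass

class ToolRegistry:
    def __init__(self, write_allowlist=None):
        # Tools the agent may call with side effects; default is none.
        self.write_allowlist = set(write_allowlist or [])
        self.tools = {}

    def register(self, name, fn, writes=False):
        # Each tool declares up front whether it mutates anything.
        self.tools[name] = (fn, writes)

    def call(self, name, *args, **kwargs):
        fn, writes = self.tools[name]
        if writes and name not in self.write_allowlist:
            raise WritePermissionError(f"'{name}' needs write access; not granted")
        return fn(*args, **kwargs)

# Usage: the agent can read, but destructive calls are blocked by default.
registry = ToolRegistry()  # no writes granted
registry.register("list_drafts", lambda: ["q3-campaign-brief"], writes=False)
registry.register("delete_draft", lambda d: None, writes=True)

print(registry.call("list_drafts"))  # read-only call succeeds
try:
    registry.call("delete_draft", "q3-campaign-brief")
except WritePermissionError as e:
    print("blocked:", e)
```

The point of the design is that the safe path is the default path: an agent that goes rogue can only delete your inbox if someone deliberately put deletion on the allowlist.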
Safety and compliance are top of mind for Kimber, and addressing them is an essential step before activating agents. “Being a cybersecurity company, there's a heightened awareness around security as a brand risk,” she says. “So of course our internal team wanted to work through that.”
Workflow design is now part of the job
When agents can execute without constant oversight, our internal structure matters more than the work itself. The Marketing AI Institute's data shows that demand for AI orchestration, data fluency and prompt engineering is displacing traditional marketing skills.
I recently worked with a team of 30 marketers who had fully embraced AI. Everyone was building their own agents and automating key areas of their work.
At first, it looked like progress, but then issues started popping up.
Two teams were solving the same problem in parallel without realizing it. Approval steps were skipped. Campaign outputs needed twice as much revision as before. People started to feel like managing the agents took more time than doing the work themselves.
All of the agents were doing what they’d been built to do, but no one had thought about the system.
Kimber’s approach meant taking a step back and breaking all the use cases into constituent parts. “We needed to identify what could be built and reused across multiple workflows and agents.”
She vibe coded an interactive workflow design and documentation tool. Anyone on her team can use the tool to help them identify how an agent should work, including the triggers, steps, outputs, and sub-agents required.
“It took me 15 minutes to get a working prototype in Claude Desktop and a few more hours adding features and turning it into a function app,” says Kimber. “And now we have a tool that helps us visualize our workflows and optimize the way we work with agents.”
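To make the idea of documenting a workflow more tangible, here's a hypothetical sketch of the kind of record such a tool might capture per agent: the triggers, steps, outputs, and sub-agents described above, plus an accountable owner. The field names and example values are my illustration, not Kimber's actual schema.

```python
# Illustrative schema for documenting an agent workflow: what starts it,
# what it does, what it produces, what it reuses, and who owns the outcome.
from dataclasses import dataclass, field

@dataclass
class Workflow:
    name: str
    owner: str                                      # human accountable for the outcome
    triggers: list = field(default_factory=list)    # what kicks the agent off
    steps: list = field(default_factory=list)       # ordered actions it takes
    outputs: list = field(default_factory=list)     # what it hands back
    sub_agents: list = field(default_factory=list)  # reusable pieces it calls
    review_required: bool = True                    # human sign-off before shipping

# Example: the campaign brief from the opening of this piece, documented.
campaign_brief = Workflow(
    name="campaign-brief-draft",
    owner="marketing-director",
    triggers=["new campaign request in intake form"],
    steps=["pull positioning doc", "draft brief", "route for review"],
    outputs=["draft brief in shared drive"],
    sub_agents=["research-summarizer"],
)

print(campaign_brief.owner, campaign_brief.sub_agents)
```

Writing workflows down this way makes the reuse question answerable: sub-agents that appear in several workflows are candidates for one team-owned build instead of duplicated personal GPTs, and every output has a named owner before the agent ever runs.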
Fixing the workflow solves part of the problem. It brings consistency, reduces duplication and makes agent behavior easier to manage.
But it doesn’t answer a deeper question: how the team itself should operate when work no longer sits neatly within roles.
Why this requires a new team structure
Most employees know what it’s like to be part of a vibrant team that works toward a common goal. They feel a strong sense of belonging, they can learn and grow together, they can give and receive constructive feedback.
“Create an environment where curiosity is safe and celebrated, where guardrails are in place, and enablement is established,” says Cathy McPhillips, CMO, SmarterX. “Leaders should set examples on using AI to make smarter, faster decisions. Determine how AI can assist current workflows and how it can make work better. AI shouldn’t feel like extra work. Consistently building and nurturing a learning culture is so important.”
Activating these value-creating teams requires giving them a clear mandate, aligning them around outcomes, agreeing on how to execute, and instilling practices for learning and feedback.
One global oil and gas company adopted this model as it shifted toward decarbonization. It moved from function-based teams to cross-functional groups with shared goals and faster feedback loops.
Members shifted from a mindset of “this is my part” to “we’re all in this together.” People contributed beyond their job descriptions, learned across functions and focused on collective outcomes instead of individual tasks.
The skills that make you good at leading people still apply
A Harvard Kennedy School experiment compared leaders working with human teams against leaders working with AI agents to solve a series of problems.
They found that leaders who are good with people can transfer those same skills to leading AI.
And what are those skills?
Good leaders ask more questions, engage in more conversational back-and-forth with their teams and use phrases like "we" and "us." They also score higher on measures of social intelligence, fluid intelligence, and decision-making skill, but don’t differ in gender, age, ethnicity or education.
These skills keep your human and AI team aligned, but you can’t forget about workflow or structure. As projects scale across people and agents, clarity on ownership, outcomes and security is even more critical.
Leaders who define them deliberately will outperform those who let their agents go rogue.