How to build a content machine with AI


Hello Reader

Half of all marketing teams are still using AI the way they were two years ago. They open ChatGPT, paste a brief, spend 45 minutes massaging the output, publish it and move on to the next one.

The Averi AI 2026 benchmarks report says 50% of content teams sit at what the report calls Level 1—ad hoc usage, with no architecture or system supporting it.

Only 5% have reached Level 4, where AI agents create, publish and iterate with human oversight at the strategy level.

I wanted to know what Level 4 looks like from the inside, so I sat down with Erin Mills, CMO of Quorum, to walk through her team’s content production system.

Quorum is a platform for government affairs professionals, where Erin leads a 15-person marketing team. Her team built a multi-agent workflow on Relevance AI that turns executive interviews into published thought leadership in a single day.

That same cycle used to take her team two weeks.

This is a step-by-step look at how the system works, what architectural decisions made it possible and what it replaced.

The Quorum Agent Content Machine

Erin’s system uses a group of specialized agents, each assigned a single task, coordinated by a project manager agent that routes the work and monitors quality.

The production workflow has five stages.

#1 The trigger

The first step (and arguably the most important) in the system is collecting the source material. Erin handles this with an ElevenLabs voice agent that conducts the interviews, just like a human would.

So instead of someone from the marketing team sitting on a call with Quorum’s CEO, he can open the agent and answer questions whenever he has time.

“We designed the agent to ask really tough questions. That way we end up with usable source material," says Erin.

This replaced a process that involved a marketing coordinator finding time on the CEO’s calendar, prepping questions, conducting the interview, transcribing it and distributing the raw material to the writing team.

Now the CEO hits record whenever the time works for him.

#2 The tone & style layer

Each thought leader at Quorum has a dedicated tone and style agent. The system ingests their past writing, social posts and call transcripts to map their tone and phrasing.

"We designed a different agent for each executive, because it’s the only way to capture their voice," says Erin. "Once we’ve trained these agents, they sound very much like the executive right out the gate."

These agents also update in real time, which is the only way to solve for model drift—a complaint I hear frequently from marketers. Most teams solve it by editing the final draft. Erin’s system solves it by feeding the right data into the right agent before the first draft is even produced.
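
The article doesn’t detail how the tone agents are trained, but one simple way to ground a per-executive voice profile—purely a sketch, with made-up function names, not Quorum’s actual pipeline—is to distill measurable style signals from past writing and feed them into the writer’s prompt:

```python
import re

def style_profile(samples: list[str]) -> dict:
    """Distill simple, measurable voice signals from past writing.
    (Illustrative only; a real tone agent would learn far richer features.)"""
    text = " ".join(samples)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    return {
        "avg_sentence_len": round(len(words) / max(len(sentences), 1), 1),
        "uses_first_person": any(w.lower() in ("i", "we") for w in words),
    }

profile = style_profile([
    "We believe policy teams deserve better tools. I see it every day.",
    "Our customers move fast!",
])
```

A profile like this can be refreshed every time new transcripts land, which is the real-time updating Erin describes.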

#3 The swarm

When the interview ends, the transcript triggers the full workflow in Relevance AI. The project manager agent takes the transcript and distributes it across specialized agents: a research agent that pulls adjacent data and industry context, a LinkedIn scraper that gathers recent social activity, a writer agent that produces the draft and a QA agent that checks for hallucinations.

Erin explains, "It’s like a giant swarm of different agents, with your top level being a project manager, and the other agents handling the workload."

#4 The QA critic

Before anything reaches a human reviewer, the draft passes through a Gemini Gem that Erin’s team calls “Before it ships.” It functions as an internal critic, and Erin describes it as deliberately harsh. "It’s really mean. It’s like this terrible mean Gem. It tears you up."
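
The exact "Before it ships" prompt isn’t published, but a deliberately harsh critic can be approximated with a wrapper like the sketch below—the rubric wording is my guess, not Quorum’s Gem:

```python
def build_critic_prompt(draft: str) -> str:
    """Wrap a draft in a deliberately harsh review prompt.
    (Hypothetical wording; the actual "Before it ships" Gem is not public.)"""
    rubric = (
        "You are a ruthless editor. Tear this draft apart.\n"
        "- Flag every claim that lacks a source in the transcript.\n"
        "- Flag generic, could-be-anyone sentences.\n"
        "- Flag anything off-voice for the named executive.\n"
        "Return a numbered list of problems. Do not praise anything."
    )
    return f"{rubric}\n\n---DRAFT---\n{draft}"

prompt = build_critic_prompt("AI is transforming everything...")
```

Telling the model not to praise anything is the key design choice: it forces findings instead of the reflexive encouragement most models default to.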

#5 The human review

After the QA agent runs, the draft goes to Erin’s content team for a final review. This is the human-in-the-loop checkpoint: the last gate before anything goes live.

"This step used to be something that would take us a couple weeks to do and now takes us about a day," says Erin. "Which means we have more time to focus on producing better content."

Setting Up the Agents

One Agent Per Job

The design principle behind Erin’s system is worth calling out because it explains her team's high output quality.

The key? Every agent does one thing.

The scraper scrapes. The researcher researches. The writer writes.

No agent carries the full weight of content production. The project manager agent coordinates the workflow, but it does not draft, research or check quality itself.

"Best practice is creating an agent that does one thing really, really well," says Erin. "If you think about a new hire that’s just out of college, if you give them a giant project to work on, chances are you’re not going to get what you’re actually looking to get."

This architecture also prevents drift.

I asked Erin whether she experiences the mid-conversation fog that happens when a single AI session tries to hold too many instructions at once.

"Our content windows for each agent are not large because we’re doing something very specific. So we don’t have the same level of drift as when I’m working on Claude and I’m trying to brainstorm."

The data supports her approach: according to a 2026 analysis of agentic AI systems, multi-agent architectures outperform single-agent approaches by 90.2% on complex tasks.

The 3 Components of Every Agent

When I asked Erin what goes into configuring an agent, she broke it down into three building blocks.

#1 The LLM

Erin picks a different model for each agent depending on what the task requires. Claude for writing. A cost-friendly model for scraping LinkedIn. Gemini for deep research. The platform lets her swap models per agent without rebuilding anything.

"I love that Relevance AI allows you to choose the model that you prefer," she says. "It means you can be really efficient with your spend."

#2 The memory

Memory determines how much context the agent retains. Options range from simple session memory to a full Postgres database. For content agents, the memory is intentionally small to prevent the drift described above.

#3 The tools

Tools are the integrations that let agents interact with the outside world: pull call transcripts, push data to the project management platform, scrape LinkedIn profiles, send emails.

Erin’s system connects to HubSpot, Gong and Airtable through API keys stored inside Relevance AI, no engineering involvement required.
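
Put together, the three building blocks map naturally onto a small config object. The sketch below invents the field names; Relevance AI’s real configuration schema may differ:

```python
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    """One agent = one job, one model, a memory policy, and its tools.
    (Field names are illustrative, not Relevance AI's schema.)"""
    name: str
    model: str            # LLM chosen per task, swappable without a rebuild
    memory: str           # "session" keeps context small to limit drift
    tools: list[str] = field(default_factory=list)

writer = AgentConfig(
    name="writer",
    model="claude",          # strong writing model for drafting
    memory="session",        # intentionally small context
    tools=["gong_transcripts", "airtable_push"],
)
scraper = AgentConfig(
    name="linkedin-scraper",
    model="cost-friendly",   # cheap model for high-volume scraping
    memory="session",
    tools=["linkedin_scrape"],
)
```

Swapping a model is then a one-field change per agent, which is what makes the per-task cost optimization Erin mentions practical.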

One more detail worth noting: the Relevance AI marketplace lets teams share agents across the organization. Other departments at Quorum can pull agents from marketing’s library and add them to their own workflows.

"If I have an agent that somebody wants to use they don’t have to duplicate it," says Erin. "They can just add it to the swarm or workforce that they’re creating."

You Already Think This Way

I asked Erin whether there was a mindset shift involved in moving from a linear, prompt-and-respond pattern to building a coordinated system like this.

She explains, "As marketing leaders we’re already thinking in systems. How do you get a team to work together to achieve your objectives? It’s the same concept with agents."

Erin admitted she has started treating her agents the way she would treat human team members.

"My team makes fun of me all the time for that because I sometimes give them genders. It does sort of feel like they’re extended team members."

If you’re thinking this is too much of a challenge to set up, Erin would disagree. She built this entire system without an engineering background.

"I don’t think you need to be super technical, but I do think you have to have tenacity and be curious about solving a problem and be able to identify clearly what the problem is," she says. "If you just go in to play with it, you might get really frustrated."

How to Build a Multi-Agent System

A setup checklist

Want to build your own multi-agent content system but aren’t sure where to get started? Here’s a quick checklist:

Before you build

  • Write down the specific content workflow you want to replace, including the steps, who owns each one and how long each takes
  • Identify the bottlenecks that slow everything down
  • Select a platform such as Relevance AI or Lindy

Train your agents

  • Create a separate agent for each thought leader, topic or program
  • Train it on past articles, social posts and call transcripts

Designing the swarm

  • Assign each agent one task only: research, writing, QA, LinkedIn scraping
  • Set up a project manager agent to route work between agents
  • Keep context windows small for each agent to reduce drift

Setting up checkpoints

  • Create a dedicated QA agent or Gem with a deliberately critical prompt
  • Test it against a draft you already know has problems to calibrate its harshness
  • Confirm it checks for hallucinations before output reaches a human reviewer

Before you call it done

  • Run one full cycle from raw interview to published piece
  • Time it and compare against your baseline
  • Document what broke, what the agent couldn't handle and what required human intervention

Want to Level Up Your AI Game?

If your team is ready for a hands-on AI strategy session, my custom-designed workshops are built to uncover the workflows that can save you hours every week.

Prefer to start small? My YouTube channel is packed with quick, practical “how-to” videos that show you exactly how I use AI tools for marketing, content, and automation.

Planning an event or conference? I deliver high-energy AI sessions that engage audiences and leave them with actionable strategies they’ll talk about long after the event. Book me for your event here.

Did someone forward you this email? You can subscribe here.

2120 Contra Costa Blvd #1059, Pleasant Hill, CA 94523
Unsubscribe · Preferences

AI at Work

AI at Work is a weekly newsletter on how marketing teams redesign workflows, roles, and systems with AI. Real examples, practical frameworks, and repeatable processes operators can use immediately. Join thousands of successful marketing leaders by subscribing below!
