How Pencil and Claude Code closed the content pipeline gap

Graphics were the missing piece in our signal-driven content strategy. A design MCP server inside Claude Code completed the pipeline from signals to published post.

· Page Sands

I had most of a content pipeline running inside Claude Code. Signal collection from X and Reddit. Keyword analysis. Content briefs. Writing. Social copy. All in one session, all informed by the same brand docs.

But every post went out with no visuals. The graphics step meant switching to a different tool, rebuilding context, and usually deciding it wasn’t worth the time. Text-only social posts. No title cards. No diagrams. The pipeline was 90% complete, and that last 10% was the part people actually see first.

A design MCP server closed that gap. Design tools like Pencil and Figma now run as MCP servers inside Claude Code, so graphics get built in the same session as everything else. That changed what the pipeline produces.

Here’s how the whole thing works, and why graphics were the piece that made the rest of it land.

The pipeline before graphics

Before getting to the gap, here’s what was already working inside Claude Code.

Signals. Two shell scripts pull raw signals from where practitioners talk. scripts/pull-x-signals.sh searches the X API with three targeted queries (Claude Code + GTM, AI agents + B2B SaaS, Claude Code + automation workflows). scripts/pull-reddit-signals.sh searches six subreddits (r/ClaudeAI, r/ClaudeCode, r/SaaS, r/AI_Agents, r/b2bmarketing, r/sales), filtering by minimum score and excluding noisy communities. One command (/project:sync-signals) runs both, reads the results, and compares them against the current content plan. I wrote a longer walkthrough of how the signal sync works.
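The actual scripts aren't reproduced here, but the X side amounts to hitting the v2 recent-search endpoint with one of those targeted queries. A minimal sketch, assuming a bearer token in `X_BEARER_TOKEN` and doing only rough URL encoding:

```shell
# Illustrative reconstruction of one pull-x-signals.sh query; the real
# script's queries and output paths may differ.
QUERY='("Claude Code" GTM) -is:retweet lang:en'

# Minimal URL encoding (spaces and quotes only) for the sketch.
ENCODED=$(printf '%s' "$QUERY" | sed -e 's/ /%20/g' -e 's/"/%22/g')
URL="https://api.x.com/2/tweets/search/recent?query=${ENCODED}&max_results=50"
echo "$URL"

# The real call would look like:
# curl -s -H "Authorization: Bearer $X_BEARER_TOKEN" "$URL" > drafts/x-signals.json
```

The Reddit script works the same way, just against each subreddit's search endpoint with a minimum-score filter applied after the fetch.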

Keywords. The site maintains a keyword plan built from 24 seed keywords expanded to roughly 80, with SERP analysis across 20 queries. When a signal theme emerges, the first check is whether it maps to a keyword cluster with search demand and low competition. If a cluster of X posts discusses Claude Code for competitive intelligence, and the SERP shows nobody ranking for it, that topic moves up.
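That check can be mechanical. Here's a sketch of the filter, with invented column names and thresholds standing in for whatever the real keyword plan uses:

```shell
# Hypothetical keyword-plan filter: keep clusters with real demand and a
# weak SERP. Columns and cutoffs are illustrative, not the actual plan's.
cat > /tmp/keyword-plan.csv <<'EOF'
cluster,monthly_volume,serp_competition
claude code competitive intelligence,480,low
ai agents crm,1300,high
claude code gtm,320,low
EOF

MATCHES=$(awk -F',' 'NR > 1 && $2 >= 300 && $3 == "low" {print $1}' /tmp/keyword-plan.csv)
echo "$MATCHES"
```

Anything that survives the filter gets compared against the signal themes; the overlap is what becomes a brief.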

Briefs. A topic that clears both the signal filter and the keyword filter becomes a content brief: primary keyword, content pillar, signal evidence, SERP gap, and angle. The brief also pulls from four brand docs (voice-profile.md, positioning.md, audience.md, writing-guide.md) that define how everything gets written.
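A brief is just a small structured file. A skeleton with the fields named above might look like this (the directory layout and filename are assumptions, not the site's actual structure):

```shell
# Generate a brief skeleton; paths are placeholders for illustration.
mkdir -p /tmp/briefs
cat > /tmp/briefs/example-brief.md <<'EOF'
# Content brief: <working title>

- Primary keyword: <keyword from the cluster>
- Content pillar: <one of the three pillars>
- Signal evidence: <links to X posts / Reddit threads>
- SERP gap: <what the top results miss>
- Angle: <the take this post commits to>

Brand inputs: voice-profile.md, positioning.md, audience.md, writing-guide.md
EOF
echo "wrote /tmp/briefs/example-brief.md"
```

Because the brand docs are listed in the brief itself, the drafting step starts with everything it needs in one place.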

Writing. Claude Code drafts directly into src/content/blog/ with correct frontmatter, slug, and H2 structure. First drafts take maybe 10 minutes. The editing is the rest. I read, cut, tighten the voice, and verify that signal evidence is accurate. A typical post takes 30 to 60 minutes total. The content workflow teardown covers this step in more detail.

Social copy. Each post generates platform-specific copy. X gets a short hook referencing the original signal. LinkedIn gets a longer format with the key takeaway above the fold. Reddit gets a genuine comment in the relevant thread, with the link secondary to the contribution. All written in the same session, using the same brand docs.

Graphics: the gap a design MCP server closed

Every other stage of the pipeline lived inside Claude Code. Signals, keywords, briefs, drafts, social copy. All in one session, all drawing from the same brand docs and keyword targets. Then I’d finish a post, write the social copy, and stop. Because the next step was opening a design tool in a separate window, recreating the context, and designing something that matched the site’s look.

Most of the time, I skipped it. The post went live with no title card. The social copy went out as plain text. LinkedIn posts without an image get buried by the algorithm. X posts without a visual scroll right past. I knew this. I skipped it anyway because the context switch felt like starting over.

The fix turned out to be design tools that run as MCP servers inside Claude Code. Figma has one. Pencil has one. The point isn’t which tool you pick. The point is that graphics become part of the same session as everything else. Same terminal. Same brand context that shaped the writing and social copy. No tab switching. No re-explaining what the post is about.
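Wiring a design tool in is a one-time registration step. Claude Code's `claude mcp add` command attaches an MCP server to the session; the server name and launch command below are placeholders, so check the tool's own docs for the real package or binary:

```shell
# Hypothetical registration of a design MCP server with Claude Code.
# "pencil" and the npx package name are illustrative placeholders.
claude mcp add pencil -- npx -y pencil-mcp-server

# Confirm the server is registered and available to the session.
claude mcp list
```

Once registered, the design tool's capabilities show up in the same session as the writing, which is the whole point.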

I went with Pencil because it’s lighter weight for the kind of graphics this pipeline needs: title cards, pipeline diagrams, pull quotes. It’s not a full design environment, and it doesn’t need to be. If your content graphics require detailed layout work or component libraries, Figma’s MCP server would make more sense.

Here’s what the workflow looks like. After writing this post and its social copy, I asked Claude Code to create a title card, a pipeline diagram, and a pull quote graphic. Pencil had access to the post content and the brand voice docs from the same session. I set up a basic style guide in Pencil with the site’s palette and type choices, and it applied those consistently across each graphic.

For each post, the graphics now include:

  • A title card sized for social sharing. It goes out with the post link on X and LinkedIn.
  • A pipeline diagram or framework visual that captures the post’s core idea. These work as standalone images when someone shares the post.
  • Pull quotes formatted as shareable graphics. A single sentence from the post, designed to be screenshot-friendly.

The visual system stays consistent because the design tool works from the same constraints as the writing. The brand docs that control tone and structure also inform the design. When those constraints live in the same session, consistency happens by default instead of by effort.

Before adding a design MCP server, graphics were aspirational. I planned to make them. I rarely did. Now they’re built in the same 30-to-60-minute window as the post itself. They’re not an extra step. They’re part of the same flow.

The content pipeline: from live signals to published post

What this pipeline produces

Since launching in late February, this pipeline has produced nine published posts across all three content pillars, each one traceable to specific signals, targeted at a specific keyword cluster, and distributed with platform-specific social copy.

The posts that perform best are the ones where the signal was strongest. When multiple X accounts and Reddit threads are discussing the same gap, and the keyword analysis shows nobody’s filling it, the resulting post tends to rank fast and get shared. The competitive intelligence workflow post came together exactly this way.

The posts that underperform are usually the ones where I skipped a step. Wrote from a hunch instead of signals. Targeted a keyword without checking the SERP. Published without social copy. The pipeline works when you run the whole thing. Shortcutting any stage shows up in the results.

Analytics: PostHog closes the feedback loop

The pipeline used to be one-directional. Signals in, content out. No way to know which posts actually held attention or which keyword clusters were worth doubling down on.

PostHog changed that. The pipeline already had scripts/pull-x-signals.sh for the X API and scripts/pull-reddit-signals.sh for Reddit. A third script, scripts/pull-posthog-analytics.sh, queries PostHog’s API using HogQL. It pulls six data sets: top pages by views and unique visitors, daily traffic, referrers, scroll depth by page, outbound clicks, and device breakdown. The output goes to drafts/posthog-analytics.md. The /project:sync-signals command runs all three scripts together.
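The shape of one of those pulls: PostHog's query API accepts a HogQL statement in a JSON body. A sketch of the top-pages pull, assuming `POSTHOG_HOST`, `PROJECT_ID`, and `POSTHOG_API_KEY` environment variables (the HogQL here is illustrative, not the script's exact query):

```shell
# One of the six data sets: top pages by pageview count.
HOGQL="SELECT properties.\$current_url AS url, count() AS views FROM events WHERE event = '\$pageview' GROUP BY url ORDER BY views DESC LIMIT 10"
BODY=$(printf '{"query": {"kind": "HogQLQuery", "query": "%s"}}' "$HOGQL")
echo "$BODY"

# The real request would look like:
# curl -s -X POST "$POSTHOG_HOST/api/projects/$PROJECT_ID/query" \
#   -H "Authorization: Bearer $POSTHOG_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$BODY" >> drafts/posthog-analytics.md
```

The other five pulls follow the same pattern with different HogQL, appending to the same markdown file so the sync command reads one artifact.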

What this adds to the content plan process: when I review signals and keywords for the next post, I also see which published posts are actually performing. The Lighthouse score post has high traffic and near-perfect scroll completion. That tells me the audience reads technical walkthroughs to the end. The CLAUDE.md for GTM post gets unique visitors and holds them through 75% scroll depth, but only half make it to the end. That’s a signal that the opening works but the post loses people in the back half.

Referrer data shows where traffic actually comes from. Google and Bing drive most organic visits. That validates the SEO-first approach in the keyword plan. Traffic spikes align with publishing days, which means distribution (social copy, Reddit comments) is doing its job on launch day, but there’s no long tail yet from search alone on most posts.

The feedback loop isn’t fully automated. I still read the analytics and make judgment calls about what to write next. But the data is now part of the same sync command, in the same format, reviewed in the same session. That’s the difference between having analytics and actually using them.

What’s next

The remaining gap is signal scoring. Right now, /project:sync-signals surfaces all signal themes equally. A cluster backed by 12 X posts and 5 Reddit threads with high engagement should rank higher than a single mention with two likes. Weighted scoring would make the content plan recommendations sharper.
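A weighted score doesn't need to be sophisticated to beat "everything ranks equally." A sketch with an invented column layout and invented weights, just to show the shape of it:

```shell
# Hypothetical scoring pass over signal themes. Columns and weights are
# illustrative: corroboration across platforms counts more than raw likes.
cat > /tmp/signal-themes.txt <<'EOF'
theme x_posts reddit_threads engagement
competitive-intel 12 5 340
mcp-graphics 3 1 45
signal-scoring 1 0 2
EOF

RANKED=$(awk 'NR > 1 {
  # score = 2 per X post + 3 per Reddit thread + dampened engagement
  score = 2*$2 + 3*$3 + $4/10
  printf "%s %.1f\n", $1, score
}' /tmp/signal-themes.txt | sort -k2,2 -nr)
echo "$RANKED"
```

With that in place, the 12-post, 5-thread cluster surfaces first and the two-like mention drops to the bottom, which is what the content plan recommendations need.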

For now, the pipeline covers signals, keywords, briefs, writing, social copy, graphics, and analytics. All in one session. Every post on this site was built this way, including this one.

Graphics were the bottleneck. Pencil changed that.