Signal sync is not a system yet

I extended the X signal sync to include Reddit. The plumbing is cool. But pulling signals from two sources still does not make it a system.

· Page Sands

In the last post, I described how I connected the X API to Claude Code and used it to build a content plan from real practitioner signals. A shell script, a Claude Code command, and a markdown file that grows over time.

It worked well. So I extended it.

What changed

The signal sync now pulls from Reddit too. A second script (pull-reddit-signals.sh) hits Reddit’s public search API with the same query categories: Claude Code + GTM, AI agents + B2B SaaS, and Claude Code + automation workflows. No API key needed for Reddit. Just curl and jq.
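For concreteness, here's the shape that fetch-and-extract step might take. The query, user agent, and output format below are my illustrative guesses, not the actual contents of pull-reddit-signals.sh, and a canned response stands in for the live API so the extraction runs offline:

```shell
# Sketch of the core of pull-reddit-signals.sh. The real fetch would be
# something like (query and user agent are illustrative):
#   curl -s -A "signal-sync/0.1" \
#     "https://www.reddit.com/search.json?q=claude+code+gtm&sort=new&limit=25"
# Canned response in the shape Reddit's public search API returns:
response='{"data":{"children":[
  {"data":{"subreddit":"ClaudeAI","score":42,"num_comments":7,"title":"Agents for my MVP?"}}
]}}'

# Flatten each hit to score / subreddit / title, one signal per line.
echo "$response" | jq -r '.data.children[].data
  | "\(.score)\t\(.subreddit)\t\(.title)"'
```

The `.data.children[].data` path and the `subreddit`, `score`, and `title` fields are Reddit's actual listing format; everything else is a sketch.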

The first run pulled 75 signals, and about a third of them were noise: stock crash commentary, Udemy coupon spam, resume reviews that happened to mention “GTM.”

So I tightened it up. Three changes:

  1. Targeted subreddit searches. Instead of only doing broad keyword searches, the script now searches specific communities first: r/ClaudeAI, r/ClaudeCode, r/SaaS, r/AI_Agents, r/b2bmarketing, r/sales. The subreddit itself provides context, so the queries can be simpler.

  2. Subreddit exclusion list. Stock trading subs, coupon subs, crypto subs. They consistently return irrelevant results. Now they’re filtered out.

  3. Minimum score threshold. Reddit posts sitting at 0-1 with zero comments are usually self-promotion or spam. A minimum score of 2 cuts the noise while keeping anything the community actually engaged with.
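Filters 2 and 3 can be a single jq pass over the pulled signals. The deny list and sample data below are illustrative, not the script's actual values:

```shell
# Illustrative exclusion list and score threshold. The real script's lists
# and field layout may differ.
DENY='["wallstreetbets","udemyfreebies","CryptoCurrency"]'
MIN_SCORE=2
signals='[
  {"subreddit":"ClaudeAI","score":5,"title":"keep"},
  {"subreddit":"wallstreetbets","score":90,"title":"dropped: excluded sub"},
  {"subreddit":"SaaS","score":1,"title":"dropped: below threshold"}
]'

# Keep only signals from non-excluded subs that meet the score floor.
echo "$signals" | jq --argjson deny "$DENY" --argjson min "$MIN_SCORE" \
  '[ .[] | .subreddit as $s
         | select(($deny | index($s) | not) and .score >= $min) ]'
```

Of the three sample signals, only the first survives both filters.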

Same signal count after filtering. Much higher quality.

The /project:sync-signals command now runs both scripts, reads both signal files, and analyzes them together against the content plan. One command, two sources, one set of recommendations.

What the two sources look like side by side

This is where it gets interesting. X and Reddit surface different types of signal about the same topics. X tends toward announcements, hot takes, and practitioner flexes. Reddit tends toward questions, frustrations, and longer-form experience sharing.

When you line them up, a clearer picture emerges than either source provides alone.

| X signal | Reddit signal | Combined insight |
| --- | --- | --- |
| Practitioner runs a full prospect enrichment pipeline in Claude Code, no code written | Solo founder asks r/ClaudeAI whether to use Claude Agents and Skills for their MVP | Use cases. People aren't debating whether to use these tools; they're asking how to use them better. The content gap is practical workflow guides, not "why AI" posts. |
| SaaStr: "You need a GTM nerd managing your AI agents. Not yet ready for agents managing agents." | Demand gen lead in r/b2bmarketing considering leaving SaaS entirely because of AI uncertainty | Readiness. The gap isn't the tools; it's knowing which humans stay in the loop and what their role becomes. Two different expressions of the same anxiety. |
| Founder deployed 14 AI agents running their entire GTM: research, outreach, pipeline QA, content | "I replaced my $300/hr marketing consultant with 4 AI agents that argue with each other. Here's why it works better." (r/SaaS) | Vision. The multi-agent GTM org is forming. X shows the scale; Reddit shows the reasoning. Both point to the same shift: agents as team members, not tools. |
| YC company launches an MCP that turns Claude Code into a "lean, mean GTM machine" | "I tried vibe coding a full SaaS with Claude Code + n8n. This surprised me." (r/n8n) | Use cases. A GTM stack is emerging around Claude Code, MCPs, and workflow tools like n8n. Worth documenting what the stack actually looks like in practice. |
| "Trust-market fit" framed as the real inflection point for AI agents in enterprise | "Everyone talks about AI replacing SaaS. My customers still don't use the features we shipped two years ago." (r/SaaS) | Readiness. Adoption is slower than the discourse suggests. That's not a problem; that's the opportunity for content that meets people where they actually are. |
| Cody Schneider gives his whole marketing team an AI data analyst that lives in Claude Code and their apps | OpenClaw alternatives thread hits 180 upvotes and 61 comments in r/AI_Agents | Vision. Teams are shopping for agent infrastructure. The question has shifted from "should we?" to "which one?" |

The three pillars (Vision, Use Cases, Readiness) map cleanly to different stages of the same conversation. X catches what’s being built. Reddit catches what’s being asked. Together, they triangulate what content is actually needed.

This is not a system yet

What I’ve built is technically interesting. Two API sources, deduplication, filtering, a single command that ties it all together. It works and it’s useful.

But it’s not a system. It’s plumbing.

A system would mean: I run the sync, the signals get analyzed, the content plan gets updated, and the next post practically writes itself based on what the data says matters most right now.

What actually happens: I run the sync, I read the output, I think about it, I decide what to write, I write it. The scripts save me the manual searching. Claude Code saves me the analysis time. But the editorial judgment is still entirely manual.

That’s fine for now. But here’s what’s left to systematize:

Signal scoring. Right now, every signal that passes the filters gets treated equally. A 218-upvote post about COBOL disruption sits next to a 2-upvote question about Claude Agents. Some kind of weighted scoring (engagement relative to the subreddit's baseline, recency, relevance to the pillars) would help surface the signals that actually matter.
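To make the idea concrete, here's one possible shape for engagement-relative scoring. Nothing like this exists in the scripts yet; the baselines, the age decay, and every number below are invented for illustration:

```shell
# Hypothetical weighted scoring: upvotes relative to a per-subreddit
# baseline, decayed by post age. All values are made up to show the shape.
signals='[
  {"title":"COBOL disruption",       "subreddit":"SaaS",       "score":218, "age_h":40},
  {"title":"Claude Agents question", "subreddit":"ClaudeCode", "score":2,   "age_h":6}
]'
baselines='{"SaaS":300,"ClaudeCode":3}'  # typical engaged-post score per sub (invented)

# Score = (upvotes / sub baseline) * recency decay; print the top signal.
echo "$signals" | jq -r --argjson base "$baselines" '
  map(. + {weight: ((.score / $base[.subreddit]) * (48 / (.age_h + 24)))})
  | sort_by(-.weight) | .[0].title'
```

With a baseline in the mix, the 2-upvote question from a tiny sub outranks the 218-upvote post from a much larger one, which is exactly the behavior the current flat treatment lacks.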

Content plan diffing. The sync command suggests changes to the content plan, but it starts fresh every time. It doesn’t know what it recommended last sync. A diff-aware system would say “this theme has been growing for three syncs, move it up” or “this topic peaked and is fading.”
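One sketch of what diff-awareness could mean: persist a per-sync count of signals per theme, then rank themes by how much they moved. The file format and theme names here are invented; this is a shape, not a design:

```shell
# Hypothetical trend diff between two syncs: which themes are growing?
prev='{"multi-agent GTM":3,"MCP stack":5}'  # counts saved from the last sync (invented)
curr='{"multi-agent GTM":7,"MCP stack":4}'  # counts from this sync (invented)

# Rank themes by delta; a positive delta means "move it up" in the plan.
jq -n --argjson p "$prev" --argjson c "$curr" '
  $c | to_entries
     | map({theme: .key, delta: (.value - ($p[.key] // 0))})
     | sort_by(-.delta)'
```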

Source expansion. X and Reddit are two sources. LinkedIn, Hacker News, and niche Slack communities are where a lot of B2B SaaS practitioners actually hang out. Each source has its own API story (some easy, some not), but the architecture supports adding more.

Publish-readiness scoring. The content plan has 10 topics. Some are well-supported by signals. Others are still based on the original brainstorm. A system would flag which topics have the strongest signal backing and suggest those as the next to write.

None of this is hard. It’s just not built yet. And being honest about the gap between “cool plumbing” and “actual system” is part of documenting the process.

The takeaway

Two signal sources are better than one, but not because you get more data. You get more texture. X shows what’s being announced and discussed in real time. Reddit shows what questions people are asking and what problems they’re working through. The overlap is where the strongest content ideas live.

The plumbing works. The next step is turning it into something that actually makes editorial decisions easier, not just data collection faster.