A case study in agentic workflow design, LLM product integration, and enterprise PM tooling
PMs at Precisely spend 5–10 hours per week on manual competitive research: reading blogs, scanning release notes, and synthesizing findings before roadmap discussions. PM-Intel is an agentic system that automates this workflow end-to-end: from scraping competitor releases to generating PRD counter-requirements using Claude Sonnet. The result is 15-minute weekly competitive reviews instead of 5-hour manual sessions, while ensuring every PM on the team works from the same shared intelligence picture.
Precisely competes in the data integrity market against Informatica, Talend, Collibra, Ataccama, and Experian. Each of these companies ships frequently. New releases, blog posts, and documentation updates arrive every week. Keeping up with all of them is a structural problem, not a willpower problem.
There is no single feed. PMs monitor 20+ sources manually: competitor blogs, release notes, LinkedIn announcements, G2 review changes, and partner press releases. Each source has a different cadence and format, making aggregation difficult without dedicated tooling.
Competitor feature launches often go unnoticed for weeks. By the time a relevant update surfaces in a roadmap discussion, it's too late to react in the current quarter. The window between a competitor shipping and Precisely responding is consistently too wide.
Even when PMs find a relevant update, converting it into an actionable internal requirement ("we should build X because competitor Y launched Z") requires significant context-switching. This step alone consumes 1–2 hours per finding, which discourages thorough competitive tracking.
PM-Intel was designed around two primary user archetypes observed across enterprise product teams at software companies of Precisely's size.
Alex Chen
Senior Product Manager, Data Quality
Goal: Stay ahead of Informatica's roadmap moves so he can position Precisely's data quality suite competitively
Pain: Spends every Sunday evening reading Informatica's release notes before Monday's product sync
“I need a way to know what competitors shipped this week without spending my whole Sunday on it.”
Priya Patel
VP of Product Management
Goal: Ensure the entire PM team is aligned on competitive landscape without scheduling weekly sync meetings
Pain: Competitive intel is siloed. Each PM tracks different competitors in their own format with no shared baseline
“I want one place where the whole team sees the same competitive picture.”
PM-Intel is built around three epics that map directly to the three sub-problems identified above.
```typescript
// Output schema
{
  competitor: string,
  feature_name: string,
  description: string,
  gap_analysis: string,
  priority: "High" | "Medium" | "Low",
  confidence: number
}
```
The system's key insight is that the output of competitive analysis is a counter-requirement that feeds directly back into product planning. This closes the loop from raw market signal to internal product action, automatically.
The loop closes: better requirements → better product → stronger competitive position
Instead of competitive intelligence being a one-time report that gets forgotten in a Confluence page, PM-Intel makes it a continuous input to the product process. Every Monday morning, the team starts with a shared baseline, and every brief card links directly to a one-click PRD generator.
Three metrics define success at launch. These were chosen to be measurable without instrumentation complexity during the prototype phase.
| Metric | Baseline | Target | How Measured |
|---|---|---|---|
| Weekly research time per PM | 5 hours | 15 minutes | Self-reported time tracking |
| Counter-requirement clicks | 0 | 10+ / week | Click event on "Draft Counter-Requirement" |
| Pipeline success rate | N/A | 95% | Scraper job success/failure logs |
$201,875
in recovered productivity annually, for a team of 10 PMs
This is a conservative estimate. It excludes additional value from faster competitive reaction (better roadmap prioritization), reduced meeting overhead (no weekly competitive sync needed), and the compounding benefit of a shared competitive baseline that prevents duplicated research across the team. The directional signal is that the ROI justifies a full-time PM tooling investment, not just a prototype.
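The headline figure reproduces exactly under one plausible set of inputs: 4.75 hours recovered per PM per week (5 hours down to 15 minutes), a fully loaded rate of $85/hour, and 50 working weeks per year. The rate and week count are my assumptions for illustration, not Precisely's actual numbers:

```python
# Recovered-productivity estimate: hours saved per PM per week,
# annualized across the team. Rate and weeks are assumed inputs.
HOURS_SAVED_PER_WEEK = 5.0 - 0.25   # 5 hours down to 15 minutes
WORKING_WEEKS = 50                  # assumption
LOADED_RATE = 85                    # $/hour, assumption
TEAM_SIZE = 10

annual_recovery = HOURS_SAVED_PER_WEEK * WORKING_WEEKS * LOADED_RATE * TEAM_SIZE
print(f"${annual_recovery:,.0f}")  # → $201,875
```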
```
┌─────────────────────────────────────────────────────────────┐
│                       PM-Intel Agent                        │
├──────────────┬──────────────────────┬───────────────────────┤
│  Scraping    │  AI Analysis         │  Frontend             │
│  Layer       │  Layer               │  Layer                │
│              │                      │                       │
│  APScheduler │  Claude Sonnet       │  Next.js 14 App Router│
│  httpx       │  Structured JSON     │  Tailwind CSS         │
│  BeautifulSoup│ Feature Extraction  │  Shadcn UI            │
│              │  Gap Analysis       │  React                 │
│              │  PRD Generation     │                        │
├──────────────┴──────────────────────┴───────────────────────┤
│                         Data Layer                          │
│           Supabase (PostgreSQL) + In-Memory Mock            │
└─────────────────────────────────────────────────────────────┘
```
Prototype Scope
The live demo uses curated sample data to showcase the analysis and PRD generation workflows end-to-end. The AI Analysis Layer and Draft Counter-Requirement features make real Claude API calls on every interaction. In a production deployment, the Python scraping backend would replace the sample data by pulling live content from competitor RSS feeds and blog pages on a weekly schedule. The architecture for this layer is fully implemented in the repo.
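In production, the weekly job's first step is turning each competitor RSS feed into structured entries for the AI Analysis Layer. A standard-library sketch of that parsing step (the feed structure and field names are generic RSS 2.0, not tied to any particular competitor; in the repo the fetch itself is driven by APScheduler and httpx, which are stubbed out here):

```python
# Parse an RSS payload into title/link/date records — the raw input
# the AI Analysis Layer would receive each week.
import xml.etree.ElementTree as ET

def parse_rss(xml_text: str) -> list[dict]:
    """Extract one record per <item> from an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        items.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
            "published": item.findtext("pubDate", default=""),
        })
    return items

# Hypothetical feed payload for illustration only
SAMPLE = """<rss version="2.0"><channel>
  <item><title>Release 10.4: AI-assisted data quality rules</title>
        <link>https://example.com/release-10-4</link>
        <pubDate>Mon, 03 Jun 2024 09:00:00 GMT</pubDate></item>
</channel></rss>"""

for entry in parse_rss(SAMPLE):
    print(entry["title"])
```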
The hardest part wasn't the scraping. It was getting the gap analysis prompt to be opinionated without hallucinating. Early versions would confidently describe Precisely capabilities that don't exist. Adding a grounded context block about Precisely's actual platform (data quality, address validation, location intelligence, geocoding, data enrichment, governance) fixed most of that.
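The fix amounts to prepending a factual context block to the system prompt so the model can only attribute capabilities to Precisely that actually exist. A simplified sketch (the capability list comes from the paragraph above; the prompt wording itself is illustrative):

```python
# Grounding block: gap_analysis may only reference these capabilities,
# which keeps "Precisely already does X" claims anchored to reality.
PRECISELY_CAPABILITIES = [
    "data quality",
    "address validation",
    "location intelligence",
    "geocoding",
    "data enrichment",
    "governance",
]

def build_system_prompt() -> str:
    caps = "\n".join(f"- {c}" for c in PRECISELY_CAPABILITIES)
    return (
        "You are a competitive analyst for Precisely.\n"
        "Precisely's platform capabilities are EXACTLY the following:\n"
        f"{caps}\n"
        "When writing gap_analysis, never attribute a capability to "
        "Precisely that is not on this list. If unsure, say the gap is unknown."
    )
```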
The "Draft Counter-Requirement" feature ended up having the most product leverage. A PM can go from seeing a competitor's feature to a ready-to-paste Jira ticket in under 30 seconds. That's the kind of improvement users notice immediately, and it's what I'd focus on validating first in a real rollout.
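Under the hood, that 30-second path is mostly deterministic formatting: rendering the structured insight as a paste-ready ticket. A hypothetical sketch (field names follow the output schema; the ticket template here is mine, not the repo's):

```python
# Render one structured insight as a paste-ready Jira ticket body.
# Uses Jira wiki markup (h2., *bold*) so it pastes cleanly.
def draft_counter_requirement(insight: dict) -> str:
    return (
        f"h2. Counter-requirement: respond to {insight['competitor']} "
        f"\"{insight['feature_name']}\"\n\n"
        f"*Priority:* {insight['priority']}  "
        f"*Confidence:* {insight['confidence']:.0%}\n\n"
        f"*What they shipped:* {insight['description']}\n\n"
        f"*Gap analysis:* {insight['gap_analysis']}\n"
    )
```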
If I were to continue this project, the highest-ROI next step would be building a feedback loop: tracking which generated requirements actually make it into the roadmap, and using that signal to improve the prompt. Right now the system outputs intelligence but doesn't learn from what gets acted on. Closing that loop would make PM-Intel a genuinely adaptive system over time.
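A first cut at that loop could be as simple as logging an outcome per generated requirement, keyed by prompt version, and watching acceptance rates as the prompt evolves. Everything in this sketch — the class, the event shape — is hypothetical:

```python
# Track what happens to each generated requirement, keyed by the prompt
# version that produced it, so prompt changes can be judged by how often
# their output actually lands on the roadmap.
from collections import defaultdict

class FeedbackLog:
    def __init__(self) -> None:
        self._events = defaultdict(lambda: {"generated": 0, "adopted": 0})

    def record(self, prompt_version: str, adopted: bool) -> None:
        bucket = self._events[prompt_version]
        bucket["generated"] += 1
        if adopted:
            bucket["adopted"] += 1

    def acceptance_rate(self, prompt_version: str) -> float:
        bucket = self._events[prompt_version]
        if bucket["generated"] == 0:
            return 0.0
        return bucket["adopted"] / bucket["generated"]
```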
Ready to explore the live product?
Launch the Dashboard→