competitor-monitor

Use when the user wants to set up ongoing competitor monitoring — define competitors to track, configure scanning frequency, enable change detection alerts, and establish competitive intelligence baselines.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Copy the command below and send it to your AI assistant to install this skill:

Install skill "competitor-monitor" with this command: npx skills add indranilbanerjee/digital-marketing-pro/indranilbanerjee-digital-marketing-pro-competitor-monitor

/dm:competitor-monitor

Purpose

Set up and manage ongoing competitor monitoring. Define which competitors to track, what to monitor (content changes, pricing updates, ad activity, social mentions, SEO rankings, SERP feature ownership), how often to scan each dimension, and what alerts to trigger when significant changes are detected.

This command establishes competitive intelligence baselines by capturing the current state of each competitor across all monitored dimensions, then configures recurring scans to detect and surface changes over time. Baselines serve as the reference point for all future change detection — without them, alerts have no context for what constitutes a meaningful shift versus normal fluctuation.

Supports per-dimension scan frequencies, so high-velocity dimensions like pricing and ads can be checked daily while slower-moving dimensions like content strategy and SEO authority are reviewed weekly or monthly. The monitoring configuration persists across sessions and powers both the competitor-alerts notification system and the share-of-voice trend tracking.

Input Required

The user must provide (or will be prompted for):

  • Competitors to track: A list of competitor names with their primary website URLs — e.g., "Acme Corp (acme.com), Beta Inc (beta.io), Gamma Labs (gammalabs.com)". Each competitor becomes a monitored entity with its own baseline profile and independent scan schedule. A minimum of one competitor is required; there is no upper limit, but 3-8 is recommended for manageable monitoring volume and meaningful competitive context without signal overload
  • Monitoring dimensions: Which competitive aspects to track for each competitor — content (new pages, blog posts, significant page edits, messaging changes on key pages), pricing (pricing page changes, plan restructuring, discount offers, free trial modifications), ads (new Google Ads campaigns, Meta Ad Library activity, ad copy and creative changes, new platform presence), social (mention volume, sentiment shifts, follower growth, posting frequency and engagement rates), seo (organic keyword rankings on tracked terms, domain authority changes, backlink profile shifts, new content indexation), serp (featured snippet ownership, People Also Ask presence, knowledge panel changes, AI overview citations). Select all for comprehensive coverage or choose a subset per competitor based on competitive relevance
  • Scan frequency per dimension: How often to check each dimension — daily, weekly, or monthly. Recommended defaults: daily for pricing and ads (high-velocity, time-sensitive competitive signals requiring fast response), weekly for content and SEO (meaningful changes accumulate over days, not hours), monthly for full strategic review (positioning, messaging, market stance, and competitive narrative evolution). Custom frequencies can be set per competitor per dimension for asymmetric monitoring
  • Alert thresholds and notification channel: What constitutes a significant change worth alerting on per dimension, and where to send alerts — Slack channel name (e.g., #competitor-intel) or email address. Thresholds can be qualitative ("any pricing change") or quantitative ("ranking drop of more than 5 positions on any tracked keyword", "social mention volume exceeding 2x baseline"). If not specified, sensible defaults are applied per dimension based on typical competitive volatility patterns
  • Tracked keywords (optional): Specific keywords to monitor for SEO and SERP dimension tracking — brand terms, category head terms, product-specific terms, and high-intent commercial queries where competitor visibility matters most. If omitted, keywords are inferred from brand context, competitor content overlap analysis, and any existing keyword research data
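
Taken together, these inputs might map onto a configuration object along the following lines. This is a minimal sketch; every field name here is an illustrative assumption, not the actual schema that competitor-tracker.py persists.

```python
# Hypothetical monitoring configuration assembled from the inputs above.
# Field names and structure are assumptions for illustration only.
config = {
    "competitors": [
        {"name": "Acme Corp", "url": "https://acme.com"},
        {"name": "Beta Inc", "url": "https://beta.io"},
    ],
    "dimensions": ["content", "pricing", "ads", "social", "seo", "serp"],
    # Per-dimension scan cadence, following the recommended defaults.
    "frequencies": {
        "pricing": "daily",
        "ads": "daily",
        "content": "weekly",
        "seo": "weekly",
        "social": "weekly",
        "serp": "monthly",
    },
    "alerts": {
        "channel": "#competitor-intel",
        "thresholds": {
            "pricing": "any-change",                       # qualitative threshold
            "seo": {"rank_drop_positions": 5},             # quantitative threshold
            "social": {"mention_volume_multiplier": 2.0},
        },
    },
    "tracked_keywords": ["project management software", "acme alternative"],
}
```

A configuration like this would be saved per brand so that each brand's monitoring context stays independent.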

Process

  1. Load brand context: Read ~/.claude-marketing/brands/_active-brand.json for the active slug, then load ~/.claude-marketing/brands/{slug}/profile.json. Apply brand positioning, competitive landscape context, target market definitions, and industry vertical. Also check for guidelines at ~/.claude-marketing/brands/{slug}/guidelines/_manifest.json — if present, load competitive sensitivity rules and any competitor-specific monitoring preferences. Check for agency SOPs at ~/.claude-marketing/sops/. If no brand exists, ask: "Set up a brand first (/dm:brand-setup)?" — or proceed with defaults.
  2. Validate competitor URLs and collect initial baseline data: For each competitor, verify the provided URL resolves and identify the correct root domain with any relevant subdomains. Capture the current state across all selected monitoring dimensions — website key pages (homepage, pricing, product, about, blog) with full meta tag snapshots, page titles, H1 headings, and core messaging blocks; pricing page structure including plan names, price points, feature lists, and tier differentiation; social media profiles across LinkedIn, Twitter/X, Facebook, Instagram, and YouTube with current follower counts, posting frequency, and recent engagement metrics; current ad activity from Google Ads Transparency Center and Meta Ad Library including active campaigns, ad copy samples, and creative formats; and organic search visibility on tracked keywords with current positions, estimated traffic, and SERP feature ownership. This comprehensive baseline snapshot becomes the reference point against which all future changes are measured.
  3. Save baselines via competitor-tracker.py: Execute competitor-tracker.py save-baseline with the collected baseline data for each competitor. Each baseline record includes the competitor name, primary URL, dimension-specific data snapshots with structured fields for comparison, collection timestamp, data source references, and data completeness indicators (marking any dimensions where data collection was partial or unavailable). Baselines are stored per-brand at ~/.claude-marketing/brands/{slug}/competitors/ so multiple brands can track overlapping competitors with fully independent monitoring contexts and separate change detection histories.
  4. Configure monitoring schedule per dimension: Set up the recurring scan schedule based on user-specified or default frequencies. Daily scans for pricing and ad activity — changes in these dimensions are time-sensitive and may require immediate competitive response such as counter-offers or bid adjustments. Weekly scans for content changes and SEO rankings — meaningful shifts accumulate over several days and weekly cadence provides sufficient detection speed without excessive scanning overhead. Monthly full strategic reviews covering competitive positioning analysis, messaging evolution, market stance assessment, and competitive narrative trajectory. Each schedule entry records the competitor name, dimension, frequency, next scan date, data collection method, and expected scan duration. Schedule persists across sessions and powers automated scanning when integrated with cron or scheduling services.
  5. Set up alert rules: Define what constitutes a significant change per dimension and configure the notification routing for each alert type. Content alerts trigger on new page publication or significant edits to key pages (homepage, pricing, product pages) where the content diff exceeds the similarity threshold — minor copy tweaks are filtered out while messaging pivots and new feature announcements surface. Pricing alerts trigger on any detectable change to pricing page content, plan names, price points, or tier structure — pricing is binary-sensitive so any change is noteworthy. Ad alerts trigger on new campaign detection in Google Ads Transparency Center or Meta Ad Library, or major creative rotation exceeding 50% new creatives. Social alerts trigger when mention volume exceeds 2x the rolling 30-day baseline average or when sentiment score shifts more than 0.3 points. SEO alerts trigger on ranking changes greater than 5 positions on any tracked keyword. SERP alerts trigger on featured snippet ownership changes, knowledge panel modifications, or People Also Ask presence shifts on tracked queries. Route all alerts to the specified Slack channel via send-notification or to email, with urgency tiering based on competitive impact assessment.
  6. Create initial competitive intelligence brief: Synthesize the collected baseline data into a structured competitive intelligence summary for each monitored competitor. Cover current market positioning and brand narrative, key messaging themes and value proposition differentiation, pricing strategy with tier structure and competitive pricing gaps, content focus areas with publishing cadence and topic coverage map, social media presence with platform-specific strengths and engagement benchmarks, SEO authority with domain metrics and keyword overlap heat map showing where competitors directly contest the brand's visibility, and areas of direct competitive overlap where monitoring should be most vigilant. Include a competitive threat assessment ranking each competitor by overall threat level (primary, secondary, emerging) with reasoning, and recommended watch priorities based on the brand's strategic objectives and market position.
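
The per-dimension significance checks in step 5 can be sketched as follows. The threshold values mirror the defaults stated above; the function name and data shapes are illustrative assumptions, not the real competitor-tracker.py API.

```python
# Sketch of per-dimension change-significance checks (step 5).
# Thresholds follow the defaults in the text; data shapes are assumed.

def is_significant(dimension, baseline, current):
    """Return True when a change crosses the alert threshold for its dimension."""
    if dimension == "pricing":
        # Pricing is binary-sensitive: any detectable change is noteworthy.
        return baseline != current
    if dimension == "seo":
        # Alert on a ranking drop of more than 5 positions on any tracked
        # keyword; a keyword missing from current ranks is treated as 100.
        return any(
            current.get(kw, 100) - pos > 5 for kw, pos in baseline.items()
        )
    if dimension == "social":
        # Alert when mention volume exceeds 2x the baseline average,
        # or sentiment shifts by more than 0.3 points.
        return (
            current["mentions"] > 2 * baseline["mentions"]
            or abs(current["sentiment"] - baseline["sentiment"]) > 0.3
        )
    return False

# Example: a 7-position drop on one keyword triggers an SEO alert.
seo_alert = is_significant("seo", {"crm software": 3}, {"crm software": 10})
```

Filtering at this layer is what keeps minor copy tweaks and normal fluctuation from reaching the notification channel.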

Output

A structured competitor monitoring setup containing:

  • Competitor profiles with baselines saved: Per-competitor profile cards showing name, URL, all captured baseline metrics across every monitored dimension, data completeness indicators, data collection timestamp, and storage confirmation with file path from competitor-tracker.py
  • Monitoring schedule configuration: Table of all scheduled scans — competitor name, dimension, frequency (daily/weekly/monthly), next scheduled scan date, data collection method, and estimated scan duration for each entry. Total scan volume summary showing scans per day, per week, and per month across all competitors
  • Alert rules defined: Per-dimension alert trigger conditions with specific thresholds (quantitative where applicable, qualitative otherwise), urgency tier assignment (critical/warning/info), notification channel and routing configuration, and estimated alert frequency per dimension based on historical competitive activity patterns observed during baseline collection
  • Initial competitive intelligence brief: Structured narrative summary per competitor covering market positioning, pricing strategy, content approach, social presence, SEO authority, and competitive overlap areas — with competitive threat ranking (primary/secondary/emerging), strategic watch recommendations, and key questions the monitoring should help answer over time
  • Next scan dates per competitor per dimension: Calendar view of all upcoming scans organized by date, so the user knows exactly when the first change detection pass will run for each competitor-dimension combination and when to expect the first monitoring results
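
Deriving the next scan date for each competitor-dimension pair from the configured cadence could look like this minimal sketch; the 30-day approximation of "monthly" and the schedule shape are assumptions, not the persisted format.

```python
from datetime import date, timedelta

# Assumed mapping from cadence label to scan interval; "monthly" is
# approximated as 30 days for illustration.
STEP = {
    "daily": timedelta(days=1),
    "weekly": timedelta(weeks=1),
    "monthly": timedelta(days=30),
}

def next_scan(last_scan: date, frequency: str) -> date:
    """Advance a scan date by its configured cadence."""
    return last_scan + STEP[frequency]

# e.g. a weekly content scan last run on 2025-06-02 is next due 2025-06-09.
due = next_scan(date(2025, 6, 2), "weekly")
```

Grouping the resulting dates by day yields the calendar view described above.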

Agents Used

  • competitor-intelligence — Competitor analysis and research across all monitoring dimensions including website auditing, pricing intelligence, ad activity scanning, and social profile benchmarking. Initial baseline data collection from public websites, ad libraries, and social platforms. Monitoring schedule configuration with per-dimension frequency optimization based on competitive volatility assessment. Alert rule definition with threshold calibration informed by competitive activity patterns and dimension-specific noise levels. Change detection framework setup with significance criteria tuned to minimize false positives while catching meaningful competitive shifts. Competitive intelligence brief synthesis with threat assessment, strategic watch recommendations, and competitive narrative interpretation

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

  • pdf-report (General): No summary provided by upstream source. Repository source; needs review.
  • reputation-management (General): No summary provided by upstream source. Repository source; needs review.
  • webinar-plan (General): No summary provided by upstream source. Repository source; needs review.
  • video-script (General): No summary provided by upstream source. Repository source; needs review.