user-research

End-user research workflow for validating behavior, needs, and pain points with representative evidence. Use when product decisions need direct user evidence through interviews, usability studies, or observed usage; do not use for internal stakeholder governance decisions.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install the "user-research" skill with: npx skills add kentoshimizu/sw-agent-skills/kentoshimizu-sw-agent-skills-user-research

User Research

Overview

Use this skill to gather credible evidence from representative users and reduce product decision risk.

Scope Boundaries

Use this skill when any of the following applies:

  • A product assumption requires validation before high-cost implementation.
  • The team needs evidence about unmet needs, friction points, or mental-model mismatches.
  • Competing solution options need user-behavior evidence for prioritization.

Templates And Assets

  • Research plan template:
    • assets/user-research-plan-template.md
  • Research evidence log:
    • assets/user-research-evidence-log-template.md

Inputs To Gather

  • Decision questions and hypotheses to test.
  • Target segments, inclusion/exclusion criteria, and sample constraints.
  • Existing analytics, support data, and prior research artifacts.
  • Privacy and consent requirements for research operations.

Deliverables

  • Research plan with method choice and participant strategy.
  • Evidence-backed findings with confidence ratings and stated sampling limitations.
  • Decision implications and recommended next actions.

Workflow

  1. Define decision-linked hypotheses and success/failure criteria.
  2. Choose method by evidence gap, timeline, and risk tolerance.
  3. Recruit representative participants and document sampling risks.
  4. Run sessions with consistent prompts and structured note capture.
  5. Distinguish observed behavior from interpretation.
  6. Synthesize findings, confidence, and implications for product decisions.
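Steps 5 and 6 hinge on keeping observation, interpretation, and claim separable so that every finding stays traceable to raw evidence. A minimal sketch of that structure, with illustrative names (the real log lives in the asset template above, not in this code):

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    participant_id: str
    observed: str        # what the participant actually did or said (step 5)
    interpretation: str  # the researcher's reading, kept separate from the observation

@dataclass
class Finding:
    claim: str
    confidence: str       # e.g. "low" | "medium" | "high"
    sampling_limits: str  # explicit caveat, per the quality standard
    evidence: list[Evidence] = field(default_factory=list)

    def is_traceable(self) -> bool:
        # Quality bar: a finding must cite at least one raw observation.
        return len(self.evidence) > 0

finding = Finding(
    claim="New users miss the export action",
    confidence="medium",
    sampling_limits="n=6, all recruited from an existing mailing list",
    evidence=[Evidence("P3", "Scrolled past export twice", "Label is not scannable")],
)
print(finding.is_traceable())  # → True
```

A finding with an empty evidence list fails `is_traceable()`, which is exactly the condition the quality standard below treats as unacceptable.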

Quality Standard

  • Findings are traceable to raw evidence.
  • Sampling limitations and confidence are explicit.
  • Sensitive information handling follows privacy requirements.
  • Outputs clearly state what should change and why.

Failure Conditions

  • Stop when participant sample is clearly non-representative for the decision.
  • Stop when consent or data-retention requirements are not met.
  • Escalate when findings are contradictory and decision cannot proceed safely.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Research category (each entry is rendered from its repository source, flagged "Needs Review", with no summary provided upstream):

  • risk-requirements-analysis
  • architecture-tradeoff-analysis
  • algorithm-complexity-analysis
  • ux-research-synthesis