human-taste

Evaluate UX and product design quality through the lens of human taste -- cognitive load, visual coherence, interaction clarity, and context fit. Use this skill whenever the user asks you to review a UI, critique a design, do a UX audit, assess aesthetic quality, check cognitive load, run a taste test on a product, evaluate user flow, or judge whether a design "feels right." Also use it when reviewing AI-generated designs or comparing design alternatives, even if the user does not explicitly mention "human taste."


Human Taste

Evaluate UX and product design through human taste -- the trained judgment that detects whether a design reduces cognitive friction, feels coherent, and fits its audience.

This skill is grounded in research from cognitive psychology, HCI, and design practice. For full citations see references/research-sources.md.

Why This Matters

LLMs can generate designs, but aesthetic judgment involves empathy, cultural awareness, and pattern recognition that require human-calibrated evaluation. Research shows:

  • Users form aesthetic impressions within milliseconds (eye-tracking studies)
  • Interfaces that reduce cognitive load are perceived as more beautiful (Processing Fluency Theory)
  • Taste develops through repeated exposure and operates at a pre-conscious perceptual level
  • Good taste means choosing simplicity over mere familiarity (Rich Hickey's "Simple Made Easy" distinction)

This skill provides a structured protocol so agents can approximate that judgment systematically.

Quick Start

When asked to evaluate a design:

  1. Identify what you are evaluating -- screenshot, wireframe, live page, component, or described flow
  2. Run the rubric below across all six dimensions
  3. Produce a Human Taste Report using the output template
  4. Cite specific elements -- never give vague praise or criticism

Evaluation Rubric

Score each dimension 1-5. Anchor your score with concrete evidence from the design.

1. Cognitive Load (weight: high)

Does the design minimize unnecessary mental effort?

| Score | Meaning |
|-------|---------|
| 1 | Overwhelming -- too many competing elements, no clear entry point |
| 2 | Heavy -- user must work to understand the hierarchy |
| 3 | Moderate -- some unnecessary complexity but functional |
| 4 | Light -- clear hierarchy, minimal distractions |
| 5 | Effortless -- information is exactly where you expect it |

Look for: element count per view, competing focal points, label clarity, progressive disclosure, information grouping.

2. Visual Coherence (weight: high)

Does the design feel unified rather than assembled from parts?

| Score | Meaning |
|-------|---------|
| 1 | Fragmented -- inconsistent spacing, colors, typography |
| 2 | Patchy -- some consistency but noticeable breaks |
| 3 | Adequate -- follows a system with minor deviations |
| 4 | Cohesive -- strong visual rhythm, clear design system |
| 5 | Seamless -- every element reinforces the whole |

Look for: spacing consistency, color palette discipline, typographic scale, alignment grid, icon style unity.

3. Interaction Clarity (weight: high)

Can a user predict what happens next at every step?

| Score | Meaning |
|-------|---------|
| 1 | Opaque -- controls are ambiguous, outcomes unclear |
| 2 | Confusing -- some actions have surprising results |
| 3 | Functional -- most flows are predictable |
| 4 | Clear -- affordances are obvious, feedback is immediate |
| 5 | Intuitive -- zero learning curve, flows feel inevitable |

Look for: button labels, hover/focus states, loading indicators, error messages, navigation predictability, undo availability.

4. Context Fit (weight: medium)

Does the design match its audience and environment?

| Score | Meaning |
|-------|---------|
| 1 | Mismatch -- tone, density, or style wrong for the audience |
| 2 | Off -- partially appropriate but feels generic |
| 3 | Acceptable -- reasonable for the context |
| 4 | Tailored -- shows awareness of user needs and setting |
| 5 | Perfect fit -- feels like it was made for exactly this audience |

Look for: reading level, information density vs audience expertise, platform conventions, accessibility, cultural appropriateness.

5. Restraint (weight: medium)

Does the design know what to leave out?

| Score | Meaning |
|-------|---------|
| 1 | Bloated -- every feature is visible, nothing is prioritized |
| 2 | Cluttered -- too many options competing for attention |
| 3 | Balanced -- reasonable feature surface |
| 4 | Disciplined -- clear priorities, secondary items recede |
| 5 | Minimal -- only the essential, nothing to remove |

Look for: feature density, progressive disclosure, empty states, whitespace usage, hidden-by-default patterns.

6. Emotional Response (weight: low)

Does the design evoke the intended feeling?

| Score | Meaning |
|-------|---------|
| 1 | Repellent -- actively unpleasant |
| 2 | Flat -- no emotional register |
| 3 | Neutral -- inoffensive |
| 4 | Warm -- creates mild positive engagement |
| 5 | Delightful -- memorable, evokes trust or joy |

Look for: micro-interactions, illustration style, copy tone, color warmth, motion design, personality.

Output Template

Produce your evaluation in this format:

# Human Taste Report

**Subject:** [what was evaluated]
**Date:** [date]
**Overall Score:** [weighted average, 1-5, one decimal] / 5

## Scores

| Dimension | Score | Key Evidence |
|-----------|-------|-------------|
| Cognitive Load | X/5 | [specific observation] |
| Visual Coherence | X/5 | [specific observation] |
| Interaction Clarity | X/5 | [specific observation] |
| Context Fit | X/5 | [specific observation] |
| Restraint | X/5 | [specific observation] |
| Emotional Response | X/5 | [specific observation] |

## Strengths
- [concrete strength with evidence]
- [concrete strength with evidence]

## Issues
- **[severity: Critical/Major/Minor]**: [specific issue] -- [why it matters] -- [suggested fix]

## Verdict
[2-3 sentence summary: what works, what does not, and the single highest-impact improvement]

Weighted average formula: (CognitiveLoad*3 + VisualCoherence*3 + InteractionClarity*3 + ContextFit*2 + Restraint*2 + EmotionalResponse*1) / 14
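The formula above can be sketched in code. This is a minimal illustration -- the dimension names and weights come straight from the rubric, but the function and variable names are our own:

```python
# Rubric weights: high = 3, medium = 2, low = 1 (total 14).
WEIGHTS = {
    "cognitive_load": 3,
    "visual_coherence": 3,
    "interaction_clarity": 3,
    "context_fit": 2,
    "restraint": 2,
    "emotional_response": 1,
}

def taste_score(scores: dict) -> float:
    """Return the weighted average (1-5, rounded to one decimal)."""
    # All six dimensions must be scored, each on the 1-5 scale.
    if set(scores) != set(WEIGHTS):
        raise ValueError("score all six dimensions, no extras")
    for name, value in scores.items():
        if not 1 <= value <= 5:
            raise ValueError(f"{name} must be 1-5, got {value}")
    total = sum(WEIGHTS[name] * value for name, value in scores.items())
    return round(total / sum(WEIGHTS.values()), 1)
```

For example, scores of 4, 3, 5, 4, 2, 3 (in rubric order) give (12 + 9 + 15 + 8 + 4 + 3) / 14 = 3.6.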

Comparing Alternatives

When comparing two or more designs:

  1. Run the rubric on each independently
  2. Add a Comparison Table showing side-by-side scores
  3. Declare a winner per dimension and overall
  4. Explain the tradeoffs -- a lower-scoring design may still be right for a specific audience

Reviewing AI-Generated Designs

AI-generated UI often has specific taste failure modes:

  • Over-decoration -- gradients, shadows, and effects without purpose
  • Generic composition -- layouts that feel template-driven rather than content-driven
  • Inconsistent density -- mixing spacious and cramped sections
  • Missing edge states -- empty states, error states, loading states not considered
  • Surface polish without structural clarity -- looks good at first glance but confusing to use

Flag these explicitly when you detect them.

When Not to Use This Skill

  • Pure backend/API design with no user-facing component
  • Code review for logic correctness (use a code-review skill instead)
  • Accessibility audits (this skill covers taste, not WCAG compliance -- though the two overlap)
