data-quality

Diagnose and fix data quality problems in datasets. Use when working with dirty data, finding duplicates, handling missing values, detecting outliers/anomalies, validating constraints (functional dependencies, referential integrity), profiling datasets, or cleaning data for analysis or ML. Covers the full data quality lifecycle - define, detect, clean, measure.

Installation

npx skills add masterkram/data-quality-skill/masterkram-data-quality-skill-data-quality

Data Quality Skill

Systematic approach to diagnosing and fixing data quality problems.

Data Quality Process

Define & Identify → Detect & Quantify → Clean & Rectify → Measure & Verify
  1. Define: Understand data context, business rules, quality requirements
  2. Detect: Profile data, find glitches (missing, duplicates, outliers, violations)
  3. Clean: Apply appropriate repair strategies
  4. Measure: Validate repairs, quantify improvement

Quick Reference

Problem              Script                  Key Function
Data overview        data_profiling.py       profile_dataframe(df)
Find quality issues  data_profiling.py       detect_glitches(df)
Missing values       missing_data.py         analyze_missing(df)
Imputation           missing_data.py         impute_mean/median/regression()
Duplicates           duplicate_detection.py  find_duplicates(df, cols)
Deduplication        duplicate_detection.py  deduplicate(df, cols)
Outliers             anomaly_detection.py    detect_anomalies(df)
Constraint check     constraint_checking.py  validate_constraints(df, rules)
String matching      similarity_metrics.py   jaro_winkler_similarity()

Workflow

Step 1: Profile the Data

from scripts.data_profiling import profile_dataframe, detect_glitches, generate_quality_report

# Quick overview
print(generate_quality_report(df))

# Detailed profile
profile = profile_dataframe(df)

# Find issues
glitches = detect_glitches(df)

Step 2: Analyze Specific Issues

Missing Data:

from scripts.missing_data import analyze_missing, test_mcar

analysis = analyze_missing(df)
# Check if safe to delete rows
mcar_test = test_mcar(df, 'column_with_missing', ['other_cols'])

Duplicates:

from scripts.duplicate_detection import find_duplicates, cluster_duplicates

matches = find_duplicates(df, ['name', 'email'], threshold=0.85)
clusters = cluster_duplicates(matches)

Outliers:

from scripts.anomaly_detection import detect_anomalies, iqr_outliers

# Multi-column summary
anomalies = detect_anomalies(df, method='iqr')

# Single column detail
result = iqr_outliers(df, 'price', multiplier=1.5)

Constraints:

from scripts.constraint_checking import validate_constraints

constraints = [
    {'type': 'unique', 'columns': ['id']},
    {'type': 'not_null', 'columns': ['name', 'email']},
    {'type': 'fd', 'determinant': ['id'], 'dependent': ['name']},
    {'type': 'domain', 'column': 'age', 'min_value': 0, 'max_value': 150},
]
results = validate_constraints(df, constraints)

Step 3: Clean the Data

Handle Missing:

from scripts.missing_data import impute_median, impute_regression, listwise_deletion

# Simple: median for numeric
df_clean = impute_median(df, 'age')

# Better: regression-based
df_clean = impute_regression(df, 'income', ['age', 'education'])

# If MCAR confirmed
df_clean = listwise_deletion(df)

Remove Duplicates:

from scripts.duplicate_detection import deduplicate

df_clean, summary = deduplicate(
    df, 
    columns=['name', 'email', 'address'],
    threshold=0.8,
    merge_strategy='most_complete'
)
print(f"Reduced from {summary['original_rows']} to {summary['final_rows']} rows")

Handle Outliers:

# Cap extreme values
q01, q99 = df['col'].quantile([0.01, 0.99])
df['col'] = df['col'].clip(q01, q99)

# Or drop outlier rows (assuming 'outlier_indices' holds row labels)
df_clean = df.drop(index=detect_anomalies(df)['col']['outlier_indices'])

Step 4: Validate

Re-run profiling and constraint checks on cleaned data to verify improvements.
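
A quick way to close the loop, reusing the calls from Steps 1 and 2 (assumes df_clean from Step 3 and the constraints list from Step 2 are still in scope):

from scripts.data_profiling import generate_quality_report, detect_glitches
from scripts.constraint_checking import validate_constraints

# Re-profile the cleaned frame and confirm the glitches are gone
print(generate_quality_report(df_clean))
print(detect_glitches(df_clean))

# Constraints that failed before cleaning should now pass
results = validate_constraints(df_clean, constraints)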

Key Concepts

Data Quality = Fit for Use

  • Free of defects
  • Has features needed for the task
  • Right information, right place, right time

Missing Data Mechanisms (see the sketch after this list):

  • MCAR: Missing Completely At Random (safe to delete)
  • MAR: Missing At Random (imputation may work)
  • MNAR: Missing Not At Random (most problematic)
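
A sketch of how the mechanism drives the choice of fix, reusing the skill's functions from above. The 'is_mcar' key is a hypothetical return shape; check missing_data.py for the real one:

from scripts.missing_data import test_mcar, listwise_deletion, impute_regression

mcar_test = test_mcar(df, 'income', ['age', 'education'])

if mcar_test['is_mcar']:  # hypothetical key -- see missing_data.py
    # MCAR: dropping rows does not bias the remaining sample
    df_clean = listwise_deletion(df)
else:
    # MAR: missingness depends on observed columns, so model it
    df_clean = impute_regression(df, 'income', ['age', 'education'])

# MNAR cannot be diagnosed from the data alone; it takes domain knowledge.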

Constraints (see the sketch after this list):

  • Functional Dependency: X → Y means X uniquely determines Y
  • Referential Integrity: foreign keys reference valid primary keys
  • Domain Constraints: values within allowed set/range
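
A from-scratch illustration of the first and third checks in pandas (a sketch, not the constraint_checking.py internals): a functional dependency X → Y holds exactly when every X group maps to a single Y value.

import pandas as pd

def fd_violations(df: pd.DataFrame, determinant: list, dependent: list):
    # Determinant groups that map to more than one dependent value
    counts = df.groupby(determinant)[dependent].nunique()
    return counts[(counts > 1).any(axis=1)]

print(fd_violations(df, ['id'], ['name']))  # empty frame => FD holds

# Domain constraint: flag values outside the allowed range
bad_ages = df[(df['age'] < 0) | (df['age'] > 150)]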

Entity Resolution (see the sketch after this list):

  • Blocking cuts pairwise comparisons from O(n²) to O(n·window)
  • Similarity metrics: Jaro-Winkler (names), Levenshtein (typos), Jaccard (sets)
  • Cluster by transitive closure, merge by strategy
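
A compact illustration of that pipeline using only the standard library (a sketch, not the skill's implementation): SequenceMatcher stands in for Jaro-Winkler, and the record and key names are made up.

from difflib import SequenceMatcher

def sorted_neighborhood(records, key, window=5, threshold=0.8):
    # Blocking: sort on a key, then compare only records inside a sliding
    # window -- roughly O(n log n + n*window) instead of O(n^2) pairs
    order = sorted(range(len(records)), key=lambda i: records[i][key])
    pairs = []
    for pos, i in enumerate(order):
        for j in order[pos + 1 : pos + window]:
            sim = SequenceMatcher(None, records[i][key], records[j][key]).ratio()
            if sim >= threshold:
                pairs.append((i, j))
    return pairs

def transitive_clusters(n, pairs):
    # Union-find: merge matched pairs into clusters by transitive closure
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in pairs:
        parent[find(a)] = find(b)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

recs = [{'name': 'Robert Smith'}, {'name': 'Rupert Smith'}, {'name': 'Ann Lee'}]
print(transitive_clusters(len(recs), sorted_neighborhood(recs, 'name')))
# [[0, 1], [2]] -- the two near-identical names collapse into one cluster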

Similarity Metrics Comparison

Metric        Best For                  Example
Jaro-Winkler  Names, short strings      "Robert" vs "Rupert"
Levenshtein   Typos, edit distance      "recieve" vs "receive"
Jaccard       Token/word comparison     "John Doe" vs "Doe, John"
Q-gram        Fuzzy substring matching  Partial matches
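
For intuition, here are minimal from-scratch versions of two of these metrics (the skill ships its own in similarity_metrics.py); they reproduce the table's examples:

import re

def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance, one row at a time
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def jaccard(a: str, b: str) -> float:
    # Token-set overlap: word order and punctuation do not matter
    tokens = lambda s: set(re.findall(r"\w+", s.lower()))
    sa, sb = tokens(a), tokens(b)
    return len(sa & sb) / len(sa | sb)

print(levenshtein("recieve", "receive"))  # 2 (the ie/ei swap costs two edits)
print(jaccard("John Doe", "Doe, John"))   # 1.0 (same token set)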
