apple-voice-memos

Extract and process transcripts from Apple Voice Memos synced via iCloud. Use when the user wants to access, read, or summarize voice memos.

Safety Notice

This listing is imported from the skills.sh public index metadata. Review the upstream SKILL.md and repository scripts before running anything.

Installation

Install the skill with:

npx skills add jessedc/claude-apple-voice-memos-skill/jessedc-claude-apple-voice-memos-skill-apple-voice-memos

Apple Voice Memos

Extract and process transcripts from Apple Voice Memos synced via iCloud.

Prerequisites

Voice Memos must be synced with iCloud on macOS.

Tools

This skill includes two scripts in its scripts/ directory:

  • extract-apple-voice-memos-metadata — Queries the CloudRecordings.db SQLite database (read-only) and outputs CSV with columns: title, date, duration, path. Returns the 30 most recent recordings.
  • extract-apple-voice-memos-transcript — Extracts the embedded transcript from a .m4a file's tsrp atom. Outputs timestamped text with filler words removed, intelligent line breaks, and paragraph breaks at natural pauses.
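The read-only metadata query can be sketched roughly as below. The table and column names (ZCLOUDRECORDING, ZCUSTOMLABEL, ZDATE, ZDURATION, ZPATH) are assumptions about the Core Data schema, not verified against the actual database:

```python
import csv
import io
import sqlite3

def recent_recordings_csv(db_path, limit=30):
    """Query the recordings database read-only and return CSV text.

    Table/column names (ZCLOUDRECORDING, ZCUSTOMLABEL, ZDATE, ZDURATION,
    ZPATH) are assumed Core Data names, not a verified schema.
    """
    # mode=ro opens the database read-only so iCloud sync state is never touched
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        rows = conn.execute(
            "SELECT ZCUSTOMLABEL, ZDATE, ZDURATION, ZPATH "
            "FROM ZCLOUDRECORDING ORDER BY ZDATE DESC LIMIT ?",
            (limit,),
        ).fetchall()
    finally:
        conn.close()
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["title", "date", "duration", "path"])
    writer.writerows(rows)
    return buf.getvalue()
```

Opening with a `file:...?mode=ro` URI is what makes the access safely read-only even if the Voice Memos process has the database open.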

Step 1: Select a voice memo

Run the metadata script to list recent recordings:

python3 scripts/extract-apple-voice-memos-metadata

Present the results as a numbered list showing title, date, and duration. Ask the user which memo they'd like to work with.
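Turning the script's CSV into that numbered list can be sketched as follows (the column layout follows the documented title, date, duration, path output; the helper name is illustrative):

```python
import csv
import io

def numbered_memo_list(csv_text):
    """Render the metadata CSV as a numbered list of title, date, duration."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return "\n".join(
        f"{i}. {r['title']} ({r['date']}, {r['duration']})"
        for i, r in enumerate(rows, start=1)
    )
```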

Error handling:

  • "Database not found" → Voice Memos iCloud sync is not enabled on this Mac.

Step 2: Extract the transcript

Run the transcript script with the path value from the selected recording:

python3 scripts/extract-apple-voice-memos-transcript "<FILENAME>.m4a"

Present the timestamped transcript to the user.

Error handling:

  • "tsrp atom not found" → This recording does not have an embedded transcript. Apple generates transcripts on-device and not all recordings will have one.
  • File not found → The recording file may not have synced to this Mac yet.
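The tsrp lookup the script performs can be sketched as a generic MP4 atom walk. The exact nesting path of the tsrp atom inside the .m4a is an assumption, so this sketch simply descends into common container atoms until it finds the fourcc (real `meta` atoms also carry a 4-byte version/flags prefix that this sketch ignores):

```python
import struct

# Container atoms worth descending into when searching for a child atom.
CONTAINERS = {b"moov", b"udta", b"meta", b"trak", b"mdia", b"minf", b"stbl"}

def find_atom(data, fourcc):
    """Depth-first search for an atom; return its payload bytes, or None.

    MP4 atoms are [4-byte big-endian size][4-byte type][payload], where
    size includes the 8-byte header.
    """
    pos = 0
    while pos + 8 <= len(data):
        size, kind = struct.unpack(">I4s", data[pos:pos + 8])
        if size < 8:  # malformed or 64-bit extended size; stop scanning
            break
        payload = data[pos + 8:pos + size]
        if kind == fourcc:
            return payload
        if kind in CONTAINERS:
            found = find_atom(payload, fourcc)
            if found is not None:
                return found
        pos += size
    return None
```

A `None` result corresponds to the "tsrp atom not found" case above: the recording has no embedded transcript.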

Step 3: Process the transcript

Read PROMPT.md from this skill's directory. Append the transcript after the ## Transcript heading. Send the complete prompt and transcript to a subagent with fresh context for processing.

The subagent will produce a structured markdown document with narrative summary, detailed notes, asides, and action items. Present this output to the user.

After presenting the output, ask the user if they'd like to save it as a markdown file. Suggest a filename in the format YYYY-MM-DD-slugified-title.md derived from the memo's title and date (e.g., 2026-02-04-the-soul-of-a-new-machine.md). Save to the current working directory by default. The user may accept, provide a different name or path, request adjustments to the content first, or skip saving.
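The suggested filename can be built with a small slug helper (an illustrative sketch, not the skill's own code):

```python
import re
from datetime import date

def memo_filename(title, memo_date):
    """Build a YYYY-MM-DD-slugified-title.md filename from memo metadata."""
    # Lowercase, collapse runs of non-alphanumerics to hyphens, trim edges
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{memo_date.isoformat()}-{slug}.md"
```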

