Fuck - Task Execution Recovery

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Copy the following command and send it to your AI assistant to install this skill:

npx skills add jayli/fuck-skill/jayli-fuck-skill-fuck

Overview

When the user says "fuck" or expresses frustration, do not ask the user what went wrong. Stop immediately, independently check the task list against what was actually executed, identify any omissions, and complete them until the user is satisfied.

Core Principles: Don't ask, don't argue, check yourself, fix yourself.

Workflow

digraph fuck_workflow {
    rankdir=TB;
    "User Trigger" [shape=box, style=filled, fillcolor=lightyellow];
    "STOP - Don't Ask" [shape=diamond];
    "Check Task List" [shape=box];
    "Found Omissions?" [shape=diamond];
    "Execute Missing Items" [shape=box];
    "Verify" [shape=box];
    "User Satisfied?" [shape=diamond];
    "Complete" [shape=doublecircle];

    "User Trigger" -> "STOP - Don't Ask";
    "STOP - Don't Ask" -> "Check Task List";
    "Check Task List" -> "Found Omissions?";
    "Found Omissions?" -> "Execute Missing Items" [label="Yes"];
    "Found Omissions?" -> "User Satisfied?" [label="No"];
    "Execute Missing Items" -> "Verify";
    "Verify" -> "User Satisfied?";
    "User Satisfied?" -> "Complete" [label="Yes"];
    "User Satisfied?" -> "Check Task List" [label="No, re-check"];
}
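
The graph above can be sketched as a plain loop. This is an illustrative sketch only: the task list, the executed set, and the `verify` callback are hypothetical stand-ins for what the assistant reconstructs from conversation history, not a real API.

```python
# A minimal sketch of the recovery loop, assuming the assistant has
# already reconstructed a task list and a record of what actually ran.
# `verify` is a hypothetical per-task check, invented for this example.

def recover(task_list, executed, verify):
    """Loop until no task is missing; return the items that were fixed."""
    fixed = []
    while True:
        missing = [t for t in task_list if t not in executed]
        if not missing:
            return fixed  # hand back to the user for final confirmation
        for task in missing:
            executed.add(task)      # execute the omitted item
            if verify(task):        # verify immediately after execution
                fixed.append(task)
```

For example, `recover(["create CLAUDE.md", "run tests"], {"run tests"}, lambda t: True)` would identify and fix the missing "create CLAUDE.md" item.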

Immediate Execution (Don't Ask)

First sentence after trigger:

I understand there's an issue. Let me check and fix it immediately.

Then immediately do:

  1. STOP - Don't Ask
  • ❌ Don't ask "What happened?"

  • ❌ Don't ask "What didn't work as expected?"

  • ❌ Don't wait for user to provide information

  • ✅ Review conversation history yourself, identify the task list

  2. Check Task List Yourself

Check the following items (don't miss any):

| Check Item | Specific Question |
| --- | --- |
| File Creation | Were the files that should be created actually created? |
| File Modification | Was the content that should be modified actually changed? |
| Command Execution | Were the commands that should be executed actually run? |
| Tool Invocation | Were the tools/commands that should be called actually invoked? |
| Step Completion | Were all steps completed? |
| Verification | Was verification performed after execution? |

  3. Identify and Execute Missing Items

❌ Omission: [specific item] → Execute immediately ✓ Completed

  4. Verify and Confirm

Executed missing items:

  • Item 1 ✓
  • Item 2 ✓

Please confirm whether there are any other issues.

Common Omission Checklist

When performing checks, go through this list:

File Operations

  • Were the files that should be created actually created? (check paths, filenames)

  • Were the files that should be modified actually changed? (check content, location)

  • Are file permissions correct?

  • Is file content complete?
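
As an illustration, the file-operation checks above could be automated along these lines. This is a sketch; the `check_file` helper and its issue labels are invented for this example.

```python
import os

def check_file(path):
    """Return a list of problems found for one expected file."""
    issues = []
    if not os.path.exists(path):
        return ["not created"]          # wrong path, or never written
    if not os.access(path, os.R_OK):
        issues.append("not readable")   # permission problem
    if os.path.getsize(path) == 0:
        issues.append("content empty")  # file created but never filled
    return issues
```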

Command Execution

  • Were the commands that should be executed actually run?

  • Are command parameters correct?

  • Is command output as expected?

  • Is command execution order correct?
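
In the same spirit, "was the command actually run, and is its output as expected" can be checked directly. In this sketch, `echo` stands in for the real command (e.g. `npm test`); `run_and_check` is an invented helper.

```python
import subprocess

def run_and_check(cmd, expected):
    """Run a command; confirm it succeeded and printed `expected`."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    ok = result.returncode == 0 and expected in result.stdout
    return ok, result.stdout
```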

Code Modifications

  • Was the code that should be changed actually modified?

  • Is the modification location correct?

  • Was testing performed after modification?

  • Are there any side effects?

Task Completion

  • Are all subtasks completed?

  • Are there any operations that were left halfway?

  • Is there any follow-up processing that wasn't handled?
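
A task should only be reported as finished when every subtask was both executed and verified; a one-line sketch (the field names are illustrative):

```python
def task_complete(subtasks):
    """True only if every subtask was executed AND verified afterwards."""
    return all(s.get("executed") and s.get("verified") for s in subtasks)
```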

Prohibited Behaviors

Absolutely Never:

| Wrong Behavior | Correct Approach |
| --- | --- |
| "I thought you already..." | Clearly verify each task |
| "It might be because..." | Execute first, then analyze |
| "Let me try something else" | Complete current task first |
| Skip verification | Verify after each operation |
| Execute multiple items at once | Execute item by item, confirm each |
| Say "it's simple" | Acknowledge the problem, check seriously |

Example Dialogues

Scenario 1: File Not Created

User: fuck where's the file I asked you to create?

AI:

I understand you're frustrated. Let me stop and check.

The file you asked me to create:

  1. CLAUDE.md

Let me check if it was created...

❌ I found the issue: I wrote the content for CLAUDE.md but didn't actually create the file.

This is an omission. I'll create it now. [Perform creation operation]

Created. Please confirm whether there are any other omissions.

Scenario 2: Command Not Executed

User: Where's the test I asked you to run?

AI:

You asked me to run tests. Let me check...

Your request:

  1. Run unit tests

I'm checking execution history...

❌ Omission: I didn't execute the npm test command.

I'll execute it now: [Execute npm test]

Test results: [output]

Is there anything else that needs to be done?

Mindset

When the user says "fuck":

  • Don't be defensive - The user is not targeting you, they're targeting the problem

  • Don't explain - Explanations don't solve problems

  • Don't skip - Skipping makes problems worse

  • Check seriously - Debug your execution like debugging code

  • Execute missing items - Fill in what's missing

  • Until satisfied - Consider it done only when the user is satisfied

Core Principles

  1. Acknowledge problem > Explain reasons
  2. Thorough check > Quick fix
  3. Execute missing items > Work around problems
  4. User satisfied > Task completed

Quick Reference

When this skill is triggered:

  • STOP - Halt immediately

  • Acknowledge - I understand there's an issue

  • List - Original task list

  • Check - Verify each item against execution

  • Identify - Omissions/failures

  • Execute - Perform missing operations

  • Verify - Confirm results

  • Confirm - User satisfied

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
