hackathon-code-implementer

Provide structured implementation guidance, code scaffolds, and done criteria for a hackathon project task.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install skill "hackathon-code-implementer" with this command: npx skills add bernieweb3/hackathon-ai-devkit/bernieweb3-hackathon-ai-devkit-hackathon-code-implementer

Goal

Provide structured implementation guidance for a hackathon project task, including code patterns, integration strategies, and shortcuts appropriate for prototype speed.


Trigger Conditions

Use this skill when:

  • A specific task from hackathon-task-planner is about to be implemented
  • Implementation guidance, code scaffolds, or shortcut patterns are needed
  • A developer is blocked and needs a concrete starting point
  • The task budget needs to be assessed for [HIGH-RISK] overrun potential
  • The skill should be invoked once per task, iterating through the full task list from hackathon-task-planner

Inputs

| Input | Type | Required | Description |
|---|---|---|---|
| task_title | string | Yes | Task being implemented (from hackathon-task-planner) |
| task_description | string | Yes | Detailed description of what the task must achieve |
| tech_stack | string[] | Yes | Technologies in use |
| mvp_demo_flow | object[] | Yes | Demo flow steps from hackathon-scope-cutter |
| time_budget_hours | number | Yes | Hours allocated to this task |
| existing_code_context | string | No | Relevant existing code snippets or file structure |
| fake_ok | boolean | No | Whether hardcoded/simulated data is acceptable (default: false) |

Outputs

| Output | Description |
|---|---|
| implementation_plan | Ordered list of sub-steps to complete the task |
| code_scaffolds | Key code snippets, patterns, or stubs to start from |
| integration_points | Where this task connects to other components |
| shortcuts | Hackathon-appropriate shortcuts (mocks, hardcoding, libraries) |
| gotchas | Common failure modes to avoid |
| done_criteria | Conditions that signal the task is complete |

Rules

  1. Prioritize working code over clean code; note tech debt explicitly.
  2. Recommend existing libraries over custom implementations whenever possible.
  3. If fake_ok is true, provide mock/stub patterns alongside real implementations.
  4. Keep code_scaffolds minimal — entry points only, not full implementations.
  5. done_criteria must be observable and verifiable within time_budget_hours.
  6. Flag any sub-step that risks taking longer than 50% of time_budget_hours as [HIGH-RISK].
  7. Do not generate production-quality architecture; optimize for demo completeness.
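The mock/stub pattern from Rule 3 can be sketched as a thin switch. This is a minimal illustration, not part of the skill itself; `fetch_summary`, `real_fetch_summary`, and the demo string are hypothetical names:

```python
# Illustration of Rule 3: keep the real path and a hardcoded stub
# side by side, selected by the fake_ok input. All names are examples.
FAKE_OK = True  # mirrors the skill's fake_ok input


def real_fetch_summary(user_id: str) -> str:
    # The genuine implementation (e.g. a Redis read) goes here.
    raise NotImplementedError("wire up in the real task")


def fetch_summary(user_id: str) -> str:
    """Return the stored session summary, or canned demo data when fake_ok."""
    if FAKE_OK:
        # Hardcoded demo data -- explicit tech debt, per Rule 1.
        return "User mentioned recurring deadline stress last week."
    return real_fetch_summary(user_id)
```

Flipping `FAKE_OK` to `False` exercises the real path, so the stub never masks a missing implementation for long.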

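Rule 6 is simple arithmetic: with a 3-hour budget, any sub-step estimated above 1.5 hours gets flagged. A sketch (the helper name is illustrative):

```python
def risk_label(estimate_hours: float, time_budget_hours: float) -> str:
    """Rule 6: flag sub-steps estimated above half the task budget."""
    threshold = 0.5 * time_budget_hours
    return "[HIGH-RISK]" if estimate_hours > threshold else "normal"


# With a 3-hour budget, a 2-hour sub-step exceeds the 1.5-hour threshold:
# risk_label(2.0, 3.0) -> "[HIGH-RISK]"
```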
Output Format

implementation_plan:
  - step: <number>
    action: "<what to do>"
    risk: "<[HIGH-RISK]|normal>"

code_scaffolds:
  - label: "<purpose>"
    language: "<language>"
    snippet: |
      <code>

integration_points:
  - component: "<name>"
    connection: "<how this task connects>"

shortcuts:
  - shortcut: "<description>"
    trade_off: "<what is sacrificed>"

gotchas:
  - "<pitfall>"

done_criteria:
  - "<verifiable condition>"

Example

Input:

task_title: "Implement session memory read/write (T-03)"
task_description: "Store and retrieve a rolling summary of the user's emotional state across chat sessions using Redis."
tech_stack: ["Python", "FastAPI", "Redis", "OpenAI API"]
mvp_demo_flow:
  - step: 2
    action: "User describes recurring work stress"
    outcome: "AI references last week's similar conversation from memory"
time_budget_hours: 3
fake_ok: false

Output:

implementation_plan:
  - step: 1
    action: "Install redis-py; add Redis client singleton to app startup"
    risk: "normal"
  - step: 2
    action: "After each GPT-4 response, call GPT-4 to generate a 2-sentence session summary and write to Redis key user:{id}:summary"
    risk: "normal"
  - step: 3
    action: "On each new conversation, prepend the stored summary to the system prompt"
    risk: "[HIGH-RISK] — summary injection may push context window near limit; keep summary ≤100 tokens"
  - step: 4
    action: "Test by running two sessions and verifying AI references prior context"
    risk: "normal"

code_scaffolds:
  - label: "Redis memory read/write"
    language: "python"
    snippet: |
      import redis

      # decode_responses=True makes get() return str instead of bytes
      r = redis.Redis(host="localhost", port=6379, decode_responses=True)

      def get_memory(user_id: str) -> str:
          return r.get(f"user:{user_id}:summary") or ""

      def save_memory(user_id: str, summary: str) -> None:
          # ex=86400 expires the key after 24h so stale demo data clears itself
          r.set(f"user:{user_id}:summary", summary, ex=86400)

integration_points:
  - component: "OpenAI chat endpoint (T-02)"
    connection: "Prepend get_memory() result to system prompt on every request"

shortcuts:
  - shortcut: "Use a single hardcoded user_id='demo' for the hackathon"
    trade_off: "No real multi-user support; acceptable for single-demo session"

gotchas:
  - "Redis not running on demo machine → add a startup health check with a clear error message"
  - "Summary growing too long → hard-cap at 150 tokens before injection"

done_criteria:
  - "Starting a new chat session shows AI referencing content from the previous session"
  - "Redis key exists and contains non-empty summary after first session ends"
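The integration point in the example (prepending memory to the system prompt) can be sketched as a pure helper that accepts the scaffold's `get_memory` function. The base prompt text and the 600-character cap are illustrative assumptions, not part of the skill:

```python
from typing import Callable


def build_system_prompt(
    get_memory: Callable[[str], str],
    user_id: str,
    base: str = "You are a supportive chat assistant.",
) -> str:
    """Prepend the stored rolling summary (T-03) to the T-02 system prompt."""
    memory = get_memory(user_id)
    if not memory:
        return base
    # Cap the injected summary to protect the context window
    # (see the [HIGH-RISK] note and the 150-token gotcha in the example).
    return f"{base}\nPrior context from earlier sessions: {memory[:600]}"
```

Usage: pass the scaffold's function directly, e.g. `build_system_prompt(get_memory, "demo")`; keeping it pure makes the prompt assembly testable without a running Redis.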

Context Files

Knowledge Base

  • knowledge/hackathon-reference-architecture.md
  • knowledge/hackathon-tools.md
  • knowledge/hackathon-mvp-strategy.md
  • knowledge/hackathon-common-failures.md

Playbooks

  • playbooks/hackathon-workflow.md

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

  • hackathon-event-parser
  • hackathon-scope-cutter
  • hackathon-judge-simulator