session-logs

Session log analysis: search and analyze historical session logs to find the content and results of previous conversations.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy the command below and send it to your AI assistant to install the skill:

npx skills add 534422530/laosi-session-logs

Session Logs - Session Log Analysis

Activation phrases: session logs / search history / history records

Features

  • Search historical sessions
  • Analyze session patterns
  • Extract key information
  • Filter by time

Python Implementation

import json
import os
from datetime import datetime, timedelta
from pathlib import Path

class SessionLogs:
    def __init__(self, log_dir: str = "~/.openclaw/logs"):
        self.log_dir = Path(log_dir).expanduser()
    
    def list_sessions(self, days: int = 7) -> list:
        """List log files modified within the last `days`, newest first."""
        sessions = []
        cutoff = datetime.now() - timedelta(days=days)
        
        for file in self.log_dir.glob("*.log"):
            mtime = datetime.fromtimestamp(file.stat().st_mtime)
            if mtime > cutoff:
                sessions.append({
                    'file': file.name,
                    'modified': mtime,
                    'size': file.stat().st_size
                })
        
        return sorted(sessions, key=lambda x: x['modified'], reverse=True)
    
    def search_logs(self, keyword: str, days: int = 7) -> list:
        """Case-insensitive keyword search over logs modified in the last `days`."""
        results = []
        cutoff = datetime.now() - timedelta(days=days)
        for file in self.log_dir.glob("*.log"):
            # Respect the `days` window (previously accepted but ignored).
            if datetime.fromtimestamp(file.stat().st_mtime) <= cutoff:
                continue
            try:
                with open(file, 'r', encoding='utf-8', errors='ignore') as f:
                    for i, line in enumerate(f, 1):
                        if keyword.lower() in line.lower():
                            results.append({
                                'file': file.name,
                                'line': i,
                                'content': line.strip()[:200]
                            })
            except OSError:
                continue  # skip unreadable files rather than swallowing all errors
        return results
    
    def extract_commands(self, log_file: str) -> list:
        """Extract lines that look like user input or commands from one log file."""
        commands = []
        with open(self.log_dir / log_file, 'r', encoding='utf-8', errors='ignore') as f:
            for line in f:
                if 'command:' in line.lower() or 'user:' in line.lower():
                    commands.append(line.strip())
        return commands
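The feature list above mentions analyzing session patterns, which the class does not yet implement. Below is a minimal sketch of one such analysis, assuming the skill's one-log-file-per-day layout; the function name `activity_by_day` and its line-count heuristic are illustrative assumptions, not part of the published skill:

```python
from collections import Counter
from pathlib import Path

def activity_by_day(log_dir: str = "~/.openclaw/logs") -> Counter:
    """Rough per-day activity: count lines in each daily log file.

    Illustrative sketch, not part of the published skill.
    """
    counts = Counter()
    for file in Path(log_dir).expanduser().glob("*.log"):
        try:
            with open(file, 'r', encoding='utf-8', errors='ignore') as f:
                counts[file.stem] = sum(1 for _ in f)  # file stem is the date
        except OSError:
            continue  # skip unreadable files
    return counts
```

Sorting the result with `counts.most_common()` then surfaces the busiest days.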

Usage Example

logs = SessionLogs()

# List sessions from the last 7 days
recent = logs.list_sessions(7)
for s in recent:
    print(f"{s['modified']}: {s['file']}")

# Search for a keyword
results = logs.search_logs("Python", days=30)
for r in results:
    print(f"{r['file']}:{r['line']}: {r['content']}")

Output Format

## Session Logs

### Recent Sessions
| Date | File | Size |
|------|------|------|
| 2026-04-28 | 2026-04-28.log | 1.2MB |
| 2026-04-27 | 2026-04-27.log | 2.3MB |

### Search Results for "Python"
- 2026-04-28.log:142: "Write a script in Python..."
- 2026-04-27.log:89: "How to import a module in Python"
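The recent-sessions table above can be rendered directly from `list_sessions` output. A small formatter sketch, assuming session dicts with the keys shown earlier (`file`, `modified` as a datetime, `size` in bytes); the helper name `format_report` is illustrative:

```python
from datetime import datetime

def format_report(sessions: list) -> str:
    """Render recent sessions as the markdown table shown above.

    Expects dicts with 'file', 'modified' (datetime), and 'size' (bytes).
    """
    lines = ["## Session Logs", "", "### Recent Sessions",
             "| Date | File | Size |", "|------|------|------|"]
    for s in sessions:
        size_mb = s['size'] / (1024 * 1024)  # bytes to megabytes
        lines.append(f"| {s['modified']:%Y-%m-%d} | {s['file']} | {size_mb:.1f}MB |")
    return "\n".join(lines)
```

A `print(format_report(logs.list_sessions(7)))` call would then reproduce the report layout.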

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Research

Signal Intelligence Pack

A general-purpose pre-grounding skill pack for LLMs. Before formal analysis, judgment, reporting, or planning, it breaks the question into executable queries, selects the right sources, cleans the evidence, marks its freshness, and adds counter-evidence. Internally it chains 5 independent skills in a fixed order: query-planner → source-router → evidence-cleaner → freshness-judge → count...

Research

Freshness Judge

A general-purpose evidence-freshness judgment skill for LLMs. Based on the time window and each item's timestamp information, it classifies every piece of evidence as current, background, stale, undated, or malformed. Use it after search results have been normalized, when current evidence needs to be separated from background evidence. Triggers: real-world questions, news, policy, or market analysis; reducing the risk of treating old material as new; time-sensitive tasks.

Research

Counter Evidence Hunter

A general-purpose counter-evidence search skill for LLMs. Around the current mainline judgment, it actively looks for counterexamples, conflicting evidence, flip conditions, and support for alternative paths, reducing single-narrative bias. Use it after a mainline judgment exists, before outputting high-risk conclusions, and before risk analysis. Triggers: reducing hallucination and single-track bias; supplementing evidence for alternative narratives; course correction before high-risk decisions.

Research

Evidence Cleaner

A general-purpose evidence-cleaning skill for LLMs. It cleans raw material such as raw search results, web-page fragments, and OCR scraps into usable evidence, reducing the contamination of downstream judgments by dirty input, fake entities, duplicate fragments, and off-domain material. Use it after search results return, before freshness judgment or narrative generation. Triggers: poor search-result quality; a large volume of evidence with a low signal-to-noise ratio; a need to standardize evidence format.
