# Agent Governance Assistant

## Overview
A comprehensive AI-powered framework for governing enterprise AI agents, covering audit trails, policy enforcement, regulatory compliance, and risk reporting. As enterprise AI agents (Microsoft Agent 365, Copilot Studio, custom agents) proliferate, governance has become one of the biggest blockers to adoption. This skill bridges the gap between AI capability and enterprise control.
## Title
Enterprise AI Agent Governance Framework — Audit · Secure · Comply
## Triggers
- "agent governance" / "AI agent管理" / "代理治理"
- "enterprise AI compliance" / "企业AI合规"
- "shadow AI detection" / "影子AI排查"
- "AI policy enforcement" / "AI策略执行"
- "agent audit trail" / "代理审计日志"
- "Microsoft Agent 365 governance" / "Agent 365治理"
- "AI risk report" / "AI风险报告"
- "Copilot Studio compliance" / "Copilot合规"
- "China AI regulation" / "中国AI监管"
- "CBIRC AI guidance" / "银保监会AI指引"
## Workflow

### Phase 1 — Agent Inventory Discovery

#### Step 1.1: Scan for Active AI Agents
Generate a structured inventory of all AI agents in the enterprise environment.
Input required:
- List of known AI platforms in use (e.g., Microsoft 365 Copilot, Salesforce Einstein, custom LangChain agents, RPA bots)
- Department ownership mapping
- API endpoints or integration points
Output: Agent Inventory Table
| Agent ID | Platform | Owner | Department | Capabilities | Data Access Level | Last Active |
|---|---|---|---|---|---|---|
| AG-001 | Microsoft Agent 365 | IT Admin | Finance | Email drafting, meeting prep | Full mailbox | 2026-05-07 |
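The inventory schema above can be sketched as a small data model with a markdown emitter. This is a minimal illustration, not a prescribed implementation; the field names simply mirror the table columns.

```python
from dataclasses import dataclass


@dataclass
class AgentRecord:
    """One row of the Agent Inventory Table."""
    agent_id: str
    platform: str
    owner: str
    department: str
    capabilities: str
    data_access_level: str
    last_active: str  # ISO date of last observed activity


def to_markdown(agents: list[AgentRecord]) -> str:
    """Render the inventory as a markdown table matching the layout above."""
    header = "| Agent ID | Platform | Owner | Department | Capabilities | Data Access Level | Last Active |"
    sep = "|---|---|---|---|---|---|---|"
    rows = [
        f"| {a.agent_id} | {a.platform} | {a.owner} | {a.department} "
        f"| {a.capabilities} | {a.data_access_level} | {a.last_active} |"
        for a in agents
    ]
    return "\n".join([header, sep, *rows])
```

In practice the records would be populated from platform APIs or CMDB exports rather than typed by hand.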
#### Step 1.2: Classify Agent Risk Level
Assign risk tier (Low / Medium / High / Critical) based on:
- Data sensitivity (PII, financial, health, IP)
- External interaction (internet, customers, third parties)
- Autonomy level (advisory only → full automation)
- Regulatory exposure (CBIRC, CFCA, personal information protection)
Risk Classification Matrix:
| Tier | Criteria | Example | Audit Frequency |
|---|---|---|---|
| Critical | Customer-facing + financial data + high autonomy | AI underwriting agent | Weekly |
| High | Internal + sensitive data + medium autonomy | AI claims processor | Monthly |
| Medium | Internal + general data + advisory only | AI meeting summarizer | Quarterly |
| Low | Internal + no sensitive data | AI email categorizer | Semi-annual |
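The classification matrix can be encoded as a simple decision function. The thresholds below are illustrative and only capture the three matrix criteria; a production classifier would also weigh regulatory exposure.

```python
def classify_risk(data_sensitivity: str, external_facing: bool, autonomy: str) -> str:
    """Assign a risk tier per the Risk Classification Matrix.

    data_sensitivity: "none" | "general" | "sensitive" (PII, financial, health, IP)
    external_facing:  True if the agent interacts with internet/customers/third parties
    autonomy:         "advisory" | "medium" | "full"
    """
    if external_facing and data_sensitivity == "sensitive" and autonomy == "full":
        return "Critical"   # e.g. AI underwriting agent
    if data_sensitivity == "sensitive":
        return "High"       # e.g. internal AI claims processor
    if data_sensitivity == "general":
        return "Medium"     # e.g. AI meeting summarizer
    return "Low"            # e.g. AI email categorizer
```

For example, an internal claims processor handling sensitive data with medium autonomy lands in the High tier, triggering monthly audits.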
### Phase 2 — Policy Framework Design

#### Step 2.1: Define Governance Policies
Generate tailored governance policies based on enterprise type and regulatory context.
For China financial institutions (CBIRC/CFCA):

**POLICY CFCA-AI-001 — Agent Data Minimization**
- All AI agents must process only the minimum necessary personal data.
- Agents cannot retain PII beyond the transaction completion window.
- Annual data audit required.

**POLICY CBIRC-AI-007 — Model Transparency**
- All AI-assisted decisions in underwriting/claims must provide human-override capability and explainability documentation.

**POLICY AI-ENTERPRISE-003 — Agent Registration**
- All production AI agents must be registered in the Enterprise Agent Registry with documented purpose, data scope, and owner.
- Unregistered agents are prohibited from accessing customer data.
#### Step 2.2: Policy Compliance Checker

For each registered agent, evaluate against all applicable policies.

Input: Agent inventory + policy list
Output: Compliance gap matrix with severity scores
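One way to sketch the compliance gap matrix is to model each policy as a predicate over an agent record. The policy checks below are hypothetical placeholders keyed to the policy IDs defined in Step 2.1.

```python
def compliance_gaps(agents, policies):
    """Evaluate each agent against each applicable policy.

    agents:   list of dicts, each with an "agent_id" plus attribute fields.
    policies: dict mapping policy ID -> predicate(agent) returning True if compliant.
    Returns a gap matrix: {agent_id: [non-compliant policy IDs]}.
    """
    gaps = {}
    for agent in agents:
        failed = [pid for pid, check in policies.items() if not check(agent)]
        if failed:
            gaps[agent["agent_id"]] = failed
    return gaps


# Hypothetical predicate encodings of two policies from Step 2.1:
POLICIES = {
    "CFCA-AI-001": lambda a: a.get("retains_pii") is False,   # data minimization
    "AI-ENTERPRISE-003": lambda a: a.get("registered") is True,  # registration
}
```

Severity scoring can then be layered on top by weighting each failed policy by the agent's risk tier.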
### Phase 3 — Shadow AI Detection

#### Step 3.1: Identify Unauthorized Agent Usage
Scan for signs of shadow AI — employees using personal AI tools on corporate data.
Detection indicators:
- Third-party AI API calls from corporate networks (non-approved domains)
- AI tool usage logs in DLP (Data Loss Prevention) systems
- Browser extensions accessing corporate APIs
- Unsanctioned Zapier/Make/n8n workflows connecting to company data
Output: Shadow AI Exposure Report
| Finding | Risk Level | Data at Risk | Recommended Action |
|---|---|---|---|
| Employee using free ChatGPT API for customer email drafting | CRITICAL | Customer PII + contract terms | Immediate block + compliance training |
| Unsanctioned n8n workflow syncing CRM to personal AI tool | HIGH | Contact data + deal values | Replace with approved integration |
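The first network-level indicator above can be sketched as a proxy-log scan. The domain lists here are illustrative examples, not a complete detection feed; real deployments would pull AI-category domains from a maintained DLP or secure web gateway feed.

```python
# Illustrative lists only -- substitute the organization's approved-tool
# allow list and a maintained AI-endpoint category feed.
APPROVED_AI_DOMAINS = {"copilot.microsoft.com", "approved-vendor-ai.example"}
AI_DOMAIN_HINTS = ("openai.com", "api.anthropic.com", "generativelanguage.googleapis.com")


def flag_shadow_ai(proxy_log):
    """Scan proxy log entries (dicts with "user" and "domain") for calls to
    AI endpoints that are not on the approved allow list."""
    findings = []
    for entry in proxy_log:
        domain = entry["domain"].lower()
        if domain in APPROVED_AI_DOMAINS:
            continue  # sanctioned tool, no finding
        if any(hint in domain for hint in AI_DOMAIN_HINTS):
            findings.append({"user": entry["user"], "domain": domain, "risk": "HIGH"})
    return findings
```

Findings from this scan would feed the Shadow AI Exposure Report, with risk levels adjusted for the data actually sent.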
### Phase 4 — Audit Trail & Reporting

#### Step 4.1: Generate Governance Audit Report
Produce a structured audit report for internal risk committees and external regulators.
Report Sections:
- Executive Summary (1 page)
- Agent Inventory & Risk Classification
- Policy Compliance Scorecard
- Shadow AI Findings
- Open Risks & Remediation Roadmap
- Appendix: Agent Decision Logs (sample)
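The report skeleton above can be assembled mechanically, which keeps section order consistent across quarterly filings. A minimal sketch, with missing sections stamped as TODO so gaps stay visible to the risk committee:

```python
from datetime import date

REPORT_SECTIONS = [
    "Executive Summary",
    "Agent Inventory & Risk Classification",
    "Policy Compliance Scorecard",
    "Shadow AI Findings",
    "Open Risks & Remediation Roadmap",
    "Appendix: Agent Decision Logs (sample)",
]


def build_report(content: dict) -> str:
    """Assemble the audit report as markdown, one heading per section.

    content maps section titles to their markdown bodies; any section
    not supplied is emitted as a visible TODO placeholder.
    """
    lines = [f"# AI Agent Governance Audit - {date.today().isoformat()}", ""]
    for section in REPORT_SECTIONS:
        lines.append(f"## {section}")
        lines.append(content.get(section, "_TODO: section pending_"))
        lines.append("")
    return "\n".join(lines)
```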
#### Step 4.2: Generate Regulatory Filing
Format findings as a CBIRC/CFCA-compliant governance disclosure.
## Input / Output Examples

### Example 1: Enterprise Agent Audit
Input:
We have 12 AI agents across Finance, Claims, and Sales departments.
Recent CBIRC guidance requires quarterly AI governance reporting.
We discovered 3 employees using personal AI tools for work tasks.
Output:
## Phase 1: Agent Inventory
[Structured table of 12 agents with risk tiers]
## Phase 2: Policy Gap Analysis
- 4 agents non-compliant with CFCA-AI-001 (data minimization)
- 2 agents lack human-override capability (CBIRC-AI-007)
- 6 agents unregistered (AI-ENTERPRISE-003 violation)
## Phase 3: Shadow AI Findings
[Shadow AI report with 3 findings]
## Phase 4: Compliance Score — 58/100
Recommended actions prioritized by risk.
## Report ready for CBIRC Q2 submission.
### Example 2: Microsoft Agent 365 Governance Setup
Input:
We are deploying Microsoft Agent 365 across 200 users.
IT needs to configure governance controls before go-live.
Finance and Claims departments need different permission levels.
Output:
## Agent 365 Governance Configuration
### Recommended Permission Matrix
| Department | Agent Capabilities | Data Scope | Human-in-Loop | Audit Level |
|------------|-------------------|------------|---------------|-------------|
| Finance | Report generation, variance analysis | Financial data only | Required for >¥100K | Full |
| Claims | Document review, fraud flag | Claims data | Required for >¥10K | Full |
| Sales | Lead scoring, email drafting | CRM data | Advisory only | Standard |
### Governance Policies to Enable
1. Data Loss Prevention (DLP) rules for PII in agent prompts
2. Agent activity logging to Sentinel/Log Analytics
3. Approval workflows for high-stakes agent actions
4. Monthly governance review dashboard
### Shadow AI Pre-emption
Block list: [personal-ai-tool-1.com, ai-tool-free.xyz]
Allow list: [Copilot, Agent 365, approved-vendor-ai.com]
## Notes & Best Practices
- Start with inventory before policy. You cannot govern what you cannot see.
- China-specific: For CBIRC/CFCA regulated entities, always include PIPL (个人信息保护法) compliance in the policy framework. Agents processing insurance claims data are subject to strict data minimization requirements.
- Human-in-the-loop is non-negotiable for any agent making or materially influencing financial decisions.
- Shadow AI is often the largest undetected risk; prioritize network-level API monitoring.
- Update agent registry quarterly — AI agent proliferation is fast; stale inventories create blind spots.
- Leverage Microsoft Purview for data classification feeding into agent governance policies.
- Regulatory alignment: Check current CBIRC AI guidance, CFCA fintech guidelines, and China's Interim Measures for the Administration of Generative AI Services when generating policies.
Author: @gechengling | Skill: agent-governance-assistant | clawhub.ai/gechengling/agent-governance-assistant