ai-policy-generator

AI governance policy creation for nonprofits and enterprises with frameworks, risk assessment, ethical guidelines, and compliance templates. Use when drafting AI usage policies, responsible AI frameworks, or organizational AI governance documents.

Safety Notice

This listing is imported from the skills.sh public index metadata. Review the upstream SKILL.md and any repository scripts before running them.

To install, copy this command and send it to your AI assistant:

npx skills add travisjneuman/.claude/travisjneuman-claude-ai-policy-generator

AI Policy Generator

Comprehensive frameworks for creating organizational AI governance policies covering acceptable use, risk management, ethical guidelines, data handling, and compliance requirements.

AI Policy Structure

Standard AI Policy Template

AI GOVERNANCE POLICY — [ORGANIZATION NAME]

1. PURPOSE AND SCOPE
   - Why this policy exists
   - Who it applies to (employees, contractors, vendors)
   - What AI systems are covered
   - Effective date and review cadence

2. DEFINITIONS
   - AI / Machine Learning
   - Generative AI
   - Automated decision-making
   - Personal data / Sensitive data
   - High-risk AI use cases

3. ACCEPTABLE USE
   - Approved AI tools and platforms
   - Permitted use cases by department
   - Prohibited uses (explicit list)
   - Approval process for new AI tools

4. DATA AND PRIVACY
   - Data classification for AI inputs
   - Prohibited data types (PII, PHI, confidential)
   - Data retention and deletion
   - Third-party data sharing restrictions

5. RISK ASSESSMENT
   - Risk classification framework (low/medium/high/critical)
   - Required assessments by risk level
   - Approval chain for high-risk deployments
   - Ongoing monitoring requirements

6. TRANSPARENCY AND DISCLOSURE
   - When to disclose AI use to stakeholders
   - Labeling AI-generated content
   - Customer/client notification requirements
   - Internal documentation standards

7. HUMAN OVERSIGHT
   - Human-in-the-loop requirements
   - Decision review thresholds
   - Escalation procedures
   - Override authority

8. BIAS AND FAIRNESS
   - Bias testing requirements
   - Fairness metrics and thresholds
   - Protected class considerations
   - Remediation procedures

9. SECURITY
   - AI-specific security controls
   - Prompt injection prevention
   - Model access controls
   - Incident response for AI failures

10. COMPLIANCE
    - Applicable regulations (EU AI Act, state laws, industry)
    - Audit requirements
    - Record-keeping obligations
    - Reporting requirements

11. TRAINING AND AWARENESS
    - Required training by role
    - Training frequency
    - Competency assessment

12. ENFORCEMENT
    - Violation reporting
    - Consequences framework
    - Appeal process

13. GOVERNANCE
    - AI governance committee composition
    - Review and update cadence
    - Policy exception process
    - Version control

Risk Classification Framework

AI Use Case Risk Levels

Risk Level | Description                                    | Examples                                                        | Requirements
Low        | Minimal impact on individuals or operations    | Summarizing meeting notes, drafting internal emails, code formatting | Self-service, basic training
Medium     | Moderate impact, reversible decisions          | Customer service drafts, content generation, data analysis      | Manager approval, human review
High       | Significant impact on individuals or finances  | Hiring screening, credit decisions, medical triage              | Committee approval, bias audit, monitoring
Critical   | Potential for serious harm, legal liability    | Autonomous decisions affecting rights, safety-critical systems  | Board approval, external audit, ongoing review
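As an illustration, the tiers above can be encoded as a simple triage helper. This is a sketch only: the field names, `classify_risk`, and the exact escalation logic are hypothetical and should be adapted to an organization's own checklist, but the tier names and approval chains mirror the table.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """Answers to the impact questions from the risk assessment checklist."""
    affects_rights: bool = False       # affects individual rights or opportunities
    safety_critical: bool = False      # affects health, safety, or welfare
    financial_decision: bool = False   # makes or influences financial decisions
    uses_sensitive_data: bool = False  # PII/PHI/confidential inputs
    reversible: bool = True            # can the decision be undone?

def classify_risk(uc: UseCase) -> str:
    """Map checklist answers to a risk tier (low/medium/high/critical)."""
    if uc.safety_critical or (uc.affects_rights and not uc.reversible):
        return "critical"
    if uc.affects_rights or uc.financial_decision:
        return "high"
    if uc.uses_sensitive_data or not uc.reversible:
        return "medium"
    return "low"

# Approval chains per tier, following the table above.
REQUIRED_APPROVALS = {
    "low": ["self-service"],
    "medium": ["manager"],
    "high": ["manager", "governance committee", "legal"],
    "critical": ["manager", "governance committee", "legal", "board", "external audit"],
}
```

A use case that only touches sensitive data lands at "medium"; anything safety-critical escalates straight to "critical" regardless of other answers.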

Risk Assessment Checklist

AI USE CASE RISK ASSESSMENT

Use Case: _____________________
Department: ___________________
Requested By: _________________
Date: ________________________

IMPACT ASSESSMENT:
  [ ] Affects individual rights or opportunities?
  [ ] Involves personal or sensitive data?
  [ ] Makes or influences financial decisions?
  [ ] Affects health, safety, or welfare?
  [ ] Has legal or regulatory implications?
  [ ] Could cause reputational harm?
  [ ] Involves vulnerable populations?

DATA ASSESSMENT:
  [ ] What data types are used as inputs?
  [ ] Is PII/PHI/confidential data involved?
  [ ] Where is data stored and processed?
  [ ] What third parties receive data?
  [ ] Is data retention compliant with policy?

TRANSPARENCY ASSESSMENT:
  [ ] Are affected parties informed of AI use?
  [ ] Is the AI's role in decisions clear?
  [ ] Can decisions be explained?
  [ ] Is there an appeal/override mechanism?

RISK LEVEL: [ ] Low  [ ] Medium  [ ] High  [ ] Critical

REQUIRED APPROVALS:
  [ ] Manager (all levels)
  [ ] AI Governance Committee (medium+)
  [ ] Legal review (high+)
  [ ] Board approval (critical)
  [ ] External audit (critical)

Acceptable Use Guidelines

Approved vs Prohibited Uses

APPROVED USES (with appropriate safeguards):

CONTENT AND COMMUNICATION:
  + Drafting internal communications
  + Summarizing documents and meetings
  + Translating content between languages
  + Brainstorming and ideation
  + Editing and proofreading

RESEARCH AND ANALYSIS:
  + Market research synthesis
  + Data analysis and visualization
  + Literature review assistance
  + Trend identification
  + Competitive analysis

PRODUCTIVITY:
  + Code generation and review
  + Template creation
  + Process documentation
  + FAQ and knowledge base content
  + Scheduling optimization

PROHIBITED USES:

  - Inputting confidential business data into public AI tools
  - Uploading PII, PHI, or financial records to unapproved platforms
  - Using AI for final hiring, firing, or disciplinary decisions
  - Generating content that impersonates real individuals
  - Making autonomous decisions that affect individual rights
  - Bypassing security controls or access restrictions
  - Generating misleading, deceptive, or fraudulent content
  - Using AI to surveil employees without disclosure
  - Submitting AI-generated work as original without disclosure
  - Using AI for any illegal purpose

Regulatory Landscape

Key Regulations by Jurisdiction

Regulation            | Jurisdiction    | Key Requirements                                          | Effective
EU AI Act             | European Union  | Risk-based classification, prohibited uses, transparency  | 2024-2027 (phased)
Colorado AI Act       | Colorado, USA   | Algorithmic discrimination prevention, impact assessments | 2026
NYC Local Law 144     | New York City   | Bias audits for automated employment decisions            | 2023
CPRA                  | California, USA | Right to opt out of automated decision-making             | 2023
GDPR Art. 22          | EU/EEA          | Right not to be subject to solely automated decisions     | 2018
Executive Order 14110 | US Federal      | AI safety standards, risk management                      | 2023
NIST AI RMF           | US (voluntary)  | Risk management framework for AI systems                  | 2023
ISO/IEC 42001         | International   | AI management system standard                             | 2023

Compliance Mapping Template

COMPLIANCE MAPPING:

Regulation: [Name]
Applicable: [ ] Yes  [ ] No  [ ] Partially
Scope: [Which AI uses fall under this regulation]

REQUIREMENT                          | STATUS    | OWNER     | DUE DATE
Risk assessment completed            | [ ]       | [Name]    | [Date]
Transparency notices deployed        | [ ]       | [Name]    | [Date]
Bias audit conducted                 | [ ]       | [Name]    | [Date]
Data protection measures in place    | [ ]       | [Name]    | [Date]
Human oversight mechanism active     | [ ]       | [Name]    | [Date]
Documentation/records maintained     | [ ]       | [Name]    | [Date]
Training completed for staff         | [ ]       | [Name]    | [Date]
Incident response plan updated       | [ ]       | [Name]    | [Date]

Ethical AI Framework

Principles-Based Approach

Principle      | Definition                                            | Implementation
Fairness       | AI should not discriminate or create disparate impact | Regular bias audits, diverse training data review
Transparency   | AI use and decision-making should be understandable   | Explainability requirements, disclosure policies
Accountability | Clear ownership of AI decisions and outcomes          | Governance structure, audit trails
Privacy        | Respect for data rights and minimization              | Data classification, consent frameworks
Safety         | AI should not cause harm to individuals or groups     | Testing protocols, human oversight, kill switches
Beneficence    | AI should benefit the organization and society        | Impact assessment, stakeholder engagement

Bias Testing Protocol

BIAS TESTING PROTOCOL:

PRE-DEPLOYMENT:
  1. Define protected characteristics relevant to use case
  2. Prepare representative test datasets
  3. Run model outputs across demographic groups
  4. Calculate disparate impact ratios
  5. Document results and remediation if needed

ONGOING MONITORING:
  Frequency: [Monthly / Quarterly / per regulation]
  Metrics:
    - Demographic parity: Equal selection rates across groups
    - Equalized odds: Equal error rates across groups
    - Calibration: Equal accuracy across groups
  Threshold: Disparate impact ratio < 0.8 triggers review

REMEDIATION:
  1. Identify root cause (data, model, process)
  2. Document corrective action plan
  3. Implement fix and retest
  4. Report to governance committee
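The disparate impact calculation in the protocol above can be sketched in a few lines. The function names and sample data are illustrative; the 0.8 threshold follows the four-fifths rule referenced in the protocol.

```python
def selection_rates(outcomes: dict[str, list[int]]) -> dict[str, float]:
    """outcomes maps each demographic group to a list of 0/1 selection decisions."""
    return {group: sum(vals) / len(vals) for group, vals in outcomes.items()}

def disparate_impact_ratio(outcomes: dict[str, list[int]]) -> float:
    """Ratio of the lowest group selection rate to the highest.
    A ratio below 0.8 (the four-fifths rule) triggers review."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit sample: 1 = selected, 0 = not selected.
outcomes = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0, 1, 1],  # 70% selection rate
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 1],  # 40% selection rate
}
ratio = disparate_impact_ratio(outcomes)
needs_review = ratio < 0.8
```

Here the ratio is 0.4 / 0.7 ≈ 0.57, well below the 0.8 threshold, so the result would be escalated for root-cause analysis under the remediation steps above.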

Data Handling Guidelines

Data Classification for AI

Classification | AI Input Allowed?                  | Conditions                    | Examples
Public         | Yes, any approved tool             | Standard use policy           | Published reports, press releases
Internal       | Yes, approved enterprise tools only | No public AI tools           | Internal memos, strategy docs
Confidential   | Limited, with approval             | Approved tools + DPA in place | Financial data, customer info
Restricted     | No (or extreme controls)           | CTO/CISO approval + encryption | PII, PHI, trade secrets, credentials
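Organizations that enforce this table in tooling (for example, a gateway in front of AI services) could express it as a simple lookup. The tool-tier names below are illustrative, not part of the policy template.

```python
# Tool tiers and the data classes each may receive, mirroring the
# classification table above. Tier names are hypothetical examples.
ALLOWED_INPUTS = {
    "public_ai_tool":     {"public"},
    "enterprise_ai_tool": {"public", "internal"},
    "approved_with_dpa":  {"public", "internal", "confidential"},
}

def may_submit(data_class: str, tool: str) -> bool:
    """Return True if data of the given class may be sent to the tool.
    Restricted data is always denied here; per the table it requires
    explicit CTO/CISO approval outside any automated default."""
    if data_class == "restricted":
        return False
    return data_class in ALLOWED_INPUTS.get(tool, set())
```

An unknown tool name falls through to an empty set, so the default answer is deny, which matches the approval-before-use posture in the Acceptable Use section.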

Vendor Assessment Checklist

AI VENDOR ASSESSMENT:

Vendor: _____________________
Tool/Service: _______________
Assessment Date: _____________

DATA HANDLING:
  [ ] Data processing agreement (DPA) in place?
  [ ] Where is data processed and stored?
  [ ] Is data used to train vendor's models?
  [ ] Can training opt-out be enforced?
  [ ] Data retention and deletion policies?
  [ ] Encryption at rest and in transit?
  [ ] SOC 2 Type II or equivalent certification?

SECURITY:
  [ ] Access controls and authentication?
  [ ] Audit logging available?
  [ ] Incident response procedures?
  [ ] Penetration testing conducted?
  [ ] Vulnerability management program?

COMPLIANCE:
  [ ] GDPR compliance (if applicable)?
  [ ] HIPAA compliance (if applicable)?
  [ ] Sector-specific certifications?
  [ ] Subprocessor transparency?

RECOMMENDATION: [ ] Approve  [ ] Conditional  [ ] Reject
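Teams that track vendor assessments in a system of record could reduce the checklist to a recommendation mechanically. This is a sketch under assumptions: which items count as hard requirements (here, a DPA and an enforceable training opt-out) is a policy choice, and the key names are invented for illustration.

```python
def vendor_recommendation(answers: dict[str, bool]) -> str:
    """answers maps each checklist item to True (satisfied) or False.

    Hard requirements (illustrative): missing any of these means Reject.
    Any other unmet item downgrades Approve to Conditional.
    """
    hard_requirements = ["dpa_in_place", "training_opt_out_enforced"]
    if not all(answers.get(item, False) for item in hard_requirements):
        return "reject"
    unmet = [item for item, ok in answers.items() if not ok]
    return "approve" if not unmet else "conditional"
```

A vendor with a DPA and training opt-out but a missing SOC 2 report would come back "conditional", prompting the follow-up conditions on the assessment form.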

Training Program Design

Role-Based Training Requirements

Role                    | Training Topics                                            | Frequency          | Assessment
All employees           | AI policy overview, acceptable use, data handling          | Annual             | Quiz (80% pass)
Managers                | Risk assessment, approval workflows, oversight             | Annual + refresher | Scenario-based
IT/Engineering          | Security controls, prompt injection, model management      | Semi-annual        | Technical assessment
Legal/Compliance        | Regulatory landscape, audit procedures, incident response  | Semi-annual        | Case study review
AI Governance Committee | Full policy, emerging regulations, industry best practices | Quarterly          | Participation-based
Executives              | Strategic implications, liability, governance              | Annual             | Briefing attendance

Policy Maintenance

Review and Update Cadence

POLICY REVIEW SCHEDULE:

ANNUAL REVIEW (minimum):
  - Full policy review by governance committee
  - Regulatory landscape update
  - Incident review and lessons learned
  - Stakeholder feedback incorporation

TRIGGERED REVIEWS:
  - New regulation enacted affecting AI use
  - Significant AI incident (internal or industry)
  - Major new AI tool adoption
  - Organizational restructure
  - Merger/acquisition
  - Audit finding requiring policy change

VERSION CONTROL:
  Version: [X.X]
  Last Updated: [Date]
  Approved By: [Name/Committee]
  Next Review: [Date]
  Change Log: [Summary of changes per version]
