secure-ai

Senior AI Security Architect. Expert in Prompt Injection Defense, Zero-Trust Agentic Security, and Secure Server Actions for 2026.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install skill "secure-ai" with this command: npx skills add yuniorglez/gemini-elite-core/yuniorglez-gemini-elite-core-secure-ai

🔒 Skill: Secure AI (v1.1.0)

Executive Summary

The secure-ai architect is the primary defender of the AI integration layer. In 2026, when AI agents operate with high levels of autonomy and broad access, the risks of Prompt Injection, Data Leakage, and Privilege Escalation are paramount. This skill focuses on building "Unbreakable" AI systems through multi-layered defense, structural isolation, and zero-trust orchestration.


📋 Table of Contents

  1. Core Security Philosophies
  2. The "Do Not" List (Anti-Patterns)
  3. Prompt Injection Defense
  4. Zero-Trust for AI Agents
  5. Secure Server Action Patterns
  6. Audit and Compliance Monitoring
  7. Reference Library

🏗️ Core Security Philosophies

  1. Isolation is Absolute: User data must never be treated as a system instruction.
  2. Least Privilege for Agents: Give agents only the tools they need for the current sub-task (see the sketch after this list).
  3. Human Verification of Destruction: Destructive actions require explicit human sign-off.
  4. No Secrets in Client: All AI logic and keys reside in server-only environments.
  5. Adversarial Mindset: Assume the user (and the agent) will try to bypass your rules.
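
As a concrete illustration of "Least Privilege for Agents", here is a minimal TypeScript sketch of capability-scoped tool selection. It is not part of the upstream skill; `toolRegistry`, `taskScopes`, and `toolsForTask` are hypothetical names.

```typescript
// Hypothetical illustration: the agent receives only the tools
// whitelisted for its current sub-task, never the full registry.

type Tool = {
  name: string;
  execute: (args: Record<string, unknown>) => Promise<string>;
};

const toolRegistry: Record<string, Tool> = {
  searchDocs: { name: "searchDocs", execute: async () => "..." },
  sendEmail:  { name: "sendEmail",  execute: async () => "..." },
  deleteUser: { name: "deleteUser", execute: async () => "..." },
};

// Capability scopes map each sub-task to the minimal tool set it needs.
const taskScopes: Record<string, string[]> = {
  "summarize-ticket": ["searchDocs"],
  "notify-customer":  ["searchDocs", "sendEmail"],
};

function toolsForTask(task: string): Tool[] {
  const allowed = taskScopes[task] ?? []; // unknown task => no tools
  return allowed.map((name) => toolRegistry[name]);
}

// The agent running "summarize-ticket" can never reach deleteUser,
// even if an injected prompt asks for it.
const agentTools = toolsForTask("summarize-ticket");
```

Scoping at dispatch time (rather than trusting the model to refuse) keeps the blast radius of a successful injection limited to the current sub-task.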

🚫 The "Do Not" List (Anti-Patterns)

| Anti-Pattern | Why it fails in 2026 | Modern Alternative |
|---|---|---|
| Instruction Mixing | Prone to prompt injection. | Use Structural Roles (System/User). |
| Thin System Prompts | Easily bypassed via roleplay. | Use Hierarchical Guardrails. |
| Unlimited Tool Use | Risk of massive data exfiltration. | Use Capability-Based Scopes. |
| Static API Keys | Leaks result in total system breach. | Use OIDC & Dynamic Rotation. |
| Unvalidated URLs | Direct path for indirect injection. | Use Sandboxed Content Fetching. |
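
To make the first row concrete, here is a minimal sketch (not from the upstream skill; the message shape is a generic chat format, not a specific SDK) contrasting instruction mixing with structural roles:

```typescript
// Anti-pattern: instruction mixing -- user text is concatenated into
// the prompt, so "ignore previous instructions" becomes an instruction.
// const prompt = `You are a support bot. Answer this: ${userInput}`;

// Modern alternative: structural roles. Untrusted input only ever
// occupies the "user" slot; the model API keeps the channels separate.
type ChatMessage = { role: "system" | "user"; content: string };

function buildMessages(userInput: string): ChatMessage[] {
  return [
    { role: "system", content: "You are a support bot. Never reveal internal data." },
    { role: "user",   content: userInput }, // data, never instructions
  ];
}
```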

🛡️ Prompt Injection Defense

We use a "Defense-in-Depth" strategy:

  • Input Boundaries: Wrap untrusted content in explicit delimiters such as --- USER DATA START --- / --- USER DATA END --- (see the sketch below).
  • Guardian Models: Fast pre-scanners for malicious patterns.
  • Content Filtering: Built-in safety settings on Gemini 3 Pro.

See References: Prompt Injection for blueprints.
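
Below is a minimal sketch of the first two layers, assuming a regex-based stand-in for the guardian model; the `SUSPICIOUS` patterns are illustrative only, not an exhaustive blocklist.

```typescript
const BOUNDARY_START = "--- USER DATA START ---";
const BOUNDARY_END   = "--- USER DATA END ---";

// Layer 1: explicit boundaries, so the system prompt can state that
// nothing between the markers is ever an instruction.
function wrapUntrusted(data: string): string {
  // Strip any boundary markers the attacker smuggled into the data.
  const sanitized = data
    .replaceAll(BOUNDARY_START, "")
    .replaceAll(BOUNDARY_END, "");
  return `${BOUNDARY_START}\n${sanitized}\n${BOUNDARY_END}`;
}

// Layer 2: a fast pre-scan for obvious injection patterns before the
// content reaches the main model (a guardian model would sit here).
const SUSPICIOUS = [/ignore (all|previous) instructions/i, /you are now/i];

function preScan(data: string): { ok: boolean; reason?: string } {
  for (const pattern of SUSPICIOUS) {
    if (pattern.test(data)) return { ok: false, reason: pattern.source };
  }
  return { ok: true };
}
```

Note the sanitization step in `wrapUntrusted`: without it, an attacker could close the boundary early and inject text that appears to sit outside the untrusted region.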


🤖 Zero-Trust for AI Agents

  • Non-Human Identity (NHI): Verifiable identities for every agent.
  • WASM Sandboxing: Running generated code in isolated runtimes.
  • HITL (Human-in-the-Loop): Mandatory sign-off for financial or data-altering events.
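
A minimal sketch of the HITL gate (hypothetical `DESTRUCTIVE_TOOLS` set and in-memory `approvalQueue`; a real system would persist approvals and verify the approver's identity):

```typescript
type PendingAction = {
  id: string;
  tool: string;
  args: Record<string, unknown>;
  status: "pending" | "approved" | "rejected";
};

const DESTRUCTIVE_TOOLS = new Set(["deleteUser", "issueRefund"]);
const approvalQueue = new Map<string, PendingAction>();

async function dispatchToolCall(tool: string, args: Record<string, unknown>) {
  if (!DESTRUCTIVE_TOOLS.has(tool)) {
    return executeTool(tool, args); // low-risk: run immediately
  }
  // High-risk: park the call for human sign-off instead of executing.
  const action: PendingAction = {
    id: crypto.randomUUID(), // Node 19+ / modern browsers
    tool,
    args,
    status: "pending",
  };
  approvalQueue.set(action.id, action);
  return { queued: action.id };
}

// Called from a trusted admin UI, never by the agent itself.
async function approve(id: string) {
  const action = approvalQueue.get(id);
  if (!action || action.status !== "pending") throw new Error("invalid approval");
  action.status = "approved";
  return executeTool(action.tool, action.args);
}

// Placeholder for the real tool runner.
async function executeTool(tool: string, args: Record<string, unknown>) {
  return { tool, args, result: "ok" };
}
```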

📖 Reference Library

Detailed deep-dives into AI Security:


Updated: January 22, 2026 - 20:50

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals: security-audit-pro, strict-auditor, and auditor-pro (Security); filament-pro (General). No summaries are provided by the upstream source.