pydantic-ai-agent-creation

Creating PydanticAI Agents

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install the skill with:

```shell
npx skills add existential-birds/beagle/existential-birds-beagle-pydantic-ai-agent-creation
```


Quick Start

```python
from pydantic_ai import Agent

# Minimal agent (text output)
agent = Agent('openai:gpt-4o')
result = agent.run_sync('Hello!')
print(result.output)  # str
```

Model Selection

Model strings follow provider:model-name format:

OpenAI

```python
agent = Agent('openai:gpt-4o')
agent = Agent('openai:gpt-4o-mini')
```

Anthropic

```python
agent = Agent('anthropic:claude-sonnet-4-5')
agent = Agent('anthropic:claude-haiku-4-5')
```

Google

```python
agent = Agent('google-gla:gemini-2.0-flash')
agent = Agent('google-vertex:gemini-2.0-flash')
```

Others: `groq:`, `mistral:`, `cohere:`, `bedrock:`, etc.

Structured Outputs

Use Pydantic models for validated, typed responses:

```python
from pydantic import BaseModel
from pydantic_ai import Agent

class CityInfo(BaseModel):
    city: str
    country: str
    population: int

agent = Agent('openai:gpt-4o', output_type=CityInfo)
result = agent.run_sync('Tell me about Paris')
print(result.output.city)        # "Paris"
print(result.output.population)  # int, validated
```
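The validation applied to the output is ordinary Pydantic validation, so type coercion and error reporting behave exactly as they do with `BaseModel` on its own. A quick offline sketch (no model call involved):

```python
from pydantic import BaseModel, ValidationError

class CityInfo(BaseModel):
    city: str
    country: str
    population: int

# Numeric strings coerce to int; non-numeric input raises ValidationError
info = CityInfo(city='Paris', country='France', population='2102650')
print(info.population)  # 2102650

try:
    CityInfo(city='Paris', country='France', population='lots')
except ValidationError as exc:
    print(exc.error_count())  # 1
```

When the model returns data that fails this validation, the agent feeds the error back and retries (up to `retries` attempts).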

Agent Configuration

```python
from pydantic_ai.settings import ModelSettings

agent = Agent(
    'openai:gpt-4o',
    output_type=MyOutput,             # Structured output type
    deps_type=MyDeps,                 # Dependency injection type
    instructions='You are helpful.',  # Static instructions
    retries=2,                        # Retry attempts for validation
    name='my-agent',                  # For logging/tracing
    model_settings=ModelSettings(     # Provider settings
        temperature=0.7,
        max_tokens=1000,
    ),
    end_strategy='early',             # Stop running tools once a final result is produced
)
```

Running Agents

Three execution methods:

Async (preferred)

```python
result = await agent.run('prompt', deps=my_deps)
```

Sync (convenience)

```python
result = agent.run_sync('prompt', deps=my_deps)
```

Streaming

```python
async with agent.run_stream('prompt') as response:
    async for chunk in response.stream_output():
        print(chunk, end='')
```

Instructions vs System Prompts

Instructions: the recommended way to shape agent behavior; static and decorator-provided instructions are concatenated into a single message for the current run.

```python
agent = Agent(
    'openai:gpt-4o',
    instructions='You are a helpful assistant. Be concise.',
)
```

Dynamic instructions via decorator

```python
@agent.instructions
def add_context(ctx: RunContext[MyDeps]) -> str:
    return f"User ID: {ctx.deps.user_id}"
```

System prompts: static strings that are kept in the message history across runs; prefer instructions for new code.

```python
agent = Agent(
    'openai:gpt-4o',
    system_prompt=['You are an expert.', 'Always cite sources.'],
)
```

Common Patterns

Parameterized Agent (Type-Safe)

```python
from dataclasses import dataclass
from pydantic_ai import Agent, RunContext

@dataclass
class Deps:
    api_key: str
    user_id: int

agent: Agent[Deps, str] = Agent(
    'openai:gpt-4o',
    deps_type=Deps,
)

# deps is now required and type-checked
result = agent.run_sync('Hello', deps=Deps(api_key='...', user_id=123))
```

No Dependencies (Satisfy Type Checker)

Option 1: explicit type annotation

```python
agent: Agent[None, str] = Agent('openai:gpt-4o')
```

Option 2: pass `deps=None`

```python
result = agent.run_sync('Hello', deps=None)
```

Decision Framework

| Scenario | Configuration |
| --- | --- |
| Simple text responses | `Agent(model)` |
| Structured data extraction | `Agent(model, output_type=MyModel)` |
| Need external services | Add `deps_type=MyDeps` |
| Validation retries needed | Increase `retries=3` |
| Debugging/monitoring | Set `instrument=True` |

