AI SDK UI Development

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

To install this skill, copy the following command and send it to your AI assistant:

npx skills add constellos/claude-code-plugins/constellos-claude-code-plugins-ai-sdk-ui

Purpose

Implement AI-powered user interfaces with the Vercel AI SDK. This skill covers streaming UI patterns, conversational interfaces, completion features, tool calling with visual feedback, and generative UI using React Server Components.

When to use:

  • Adding chat interfaces or conversational features

  • Implementing streaming text/content display

  • Building AI completion features (autocomplete, suggestions)

  • Creating tool calling UIs with visual feedback

  • Implementing generative UI with Server Components

Core Concepts

Client-Side Hooks

The AI SDK provides React hooks for client-side AI interactions:

useChat - Conversational interfaces with message history:

```tsx
'use client';

import { useChat } from 'ai/react';

export function ChatInterface() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } =
    useChat({ api: '/api/chat' });

  return (
    <div className="flex flex-col h-full">
      <div className="flex-1 overflow-y-auto p-4 space-y-4">
        {messages.map((message) => (
          <div
            key={message.id}
            className={message.role === 'user' ? 'text-right' : 'text-left'}
          >
            <div
              className={`inline-block p-3 rounded-lg ${
                message.role === 'user'
                  ? 'bg-blue-500 text-white'
                  : 'bg-gray-100 text-gray-900'
              }`}
            >
              {message.content}
            </div>
          </div>
        ))}
      </div>

      <form onSubmit={handleSubmit} className="p-4 border-t">
        <div className="flex gap-2">
          <input
            value={input}
            onChange={handleInputChange}
            placeholder="Type a message..."
            className="flex-1 p-2 border rounded"
            disabled={isLoading}
          />
          <button
            type="submit"
            disabled={isLoading}
            className="px-4 py-2 bg-blue-500 text-white rounded disabled:opacity-50"
          >
            {isLoading ? 'Sending...' : 'Send'}
          </button>
        </div>
      </form>
    </div>
  );
}
```

useCompletion - Single completions without message history:

```tsx
'use client';

import { useCompletion } from 'ai/react';

export function CompletionInput() {
  const { completion, input, handleInputChange, handleSubmit, isLoading } =
    useCompletion({ api: '/api/completion' });

  return (
    <div className="space-y-4">
      <form onSubmit={handleSubmit}>
        <textarea
          value={input}
          onChange={handleInputChange}
          placeholder="Enter prompt..."
          className="w-full p-3 border rounded"
          rows={4}
        />
        <button
          type="submit"
          disabled={isLoading}
          className="mt-2 px-4 py-2 bg-green-500 text-white rounded"
        >
          {isLoading ? 'Generating...' : 'Generate'}
        </button>
      </form>

      {completion && (
        <div className="p-4 bg-gray-50 rounded">
          <h3 className="font-semibold mb-2">Result:</h3>
          <p className="whitespace-pre-wrap">{completion}</p>
        </div>
      )}
    </div>
  );
}
```

Server-Side API Routes

Create API routes that stream responses:

Chat API Route (app/api/chat/route.ts):

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    system: 'You are a helpful assistant.',
  });

  return result.toDataStreamResponse();
}
```

Completion API Route (app/api/completion/route.ts):

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    prompt,
  });

  return result.toDataStreamResponse();
}
```

Streaming UI Patterns

Token-by-token streaming display:

```tsx
'use client';

import { useChat } from 'ai/react';

export function StreamingChat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          <strong>{message.role}:</strong>{' '}
          {/* Content streams in token-by-token */}
          <span className="animate-pulse">{message.content}</span>
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
```

Loading states and indicators:

```tsx
'use client';

import { useChat } from 'ai/react';

export function ChatWithLoadingStates() {
  const { messages, input, handleInputChange, handleSubmit, isLoading, error } =
    useChat();

  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>{message.content}</div>
      ))}

      {isLoading && (
        <div className="flex items-center gap-2 text-gray-500">
          <div className="animate-spin h-4 w-4 border-2 border-gray-300 border-t-blue-500 rounded-full" />
          <span>AI is thinking...</span>
        </div>
      )}

      {error && (
        <div className="text-red-500 p-2 bg-red-50 rounded">
          Error: {error.message}
        </div>
      )}

      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          disabled={isLoading}
        />
        <button type="submit" disabled={isLoading}>
          Send
        </button>
      </form>
    </div>
  );
}
```

Tool Calling UI

Implement tools the model can call, with visual feedback while they run:

Server-side tool definition:

```ts
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    tools: {
      getWeather: tool({
        description: 'Get current weather for a location',
        parameters: z.object({
          location: z.string().describe('City name'),
        }),
        execute: async ({ location }) => {
          // Fetch weather data
          return { temperature: 72, condition: 'sunny', location };
        },
      }),
      searchProducts: tool({
        description: 'Search for products',
        parameters: z.object({
          query: z.string(),
          maxResults: z.number().optional().default(5),
        }),
        execute: async ({ query, maxResults }) => {
          // Search products
          return { results: [], query, count: 0 };
        },
      }),
    },
  });

  return result.toDataStreamResponse();
}
```

Client-side tool result rendering:

```tsx
'use client';

import { useChat } from 'ai/react';

function WeatherCard({
  data,
}: {
  data: { temperature: number; condition: string; location: string };
}) {
  return (
    <div className="p-4 bg-blue-50 rounded-lg">
      <h3 className="font-semibold">{data.location}</h3>
      <p className="text-2xl">{data.temperature}°F</p>
      <p className="text-gray-600">{data.condition}</p>
    </div>
  );
}

export function ChatWithTools() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    maxSteps: 5, // Allow multi-step tool use
  });

  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          {message.role === 'assistant' &&
            message.toolInvocations?.map((tool) => (
              <div key={tool.toolCallId}>
                {tool.toolName === 'getWeather' && tool.state === 'result' && (
                  <WeatherCard data={tool.result} />
                )}
                {tool.state === 'call' && (
                  <div className="animate-pulse p-2 bg-gray-100 rounded">
                    Calling {tool.toolName}...
                  </div>
                )}
              </div>
            ))}
          {message.content && <p>{message.content}</p>}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
```

Generative UI with Server Components

Use streamUI for server-side streaming of React components:

Server Action with streamUI:

```tsx
'use server';

import { openai } from '@ai-sdk/openai';
import { streamUI } from 'ai/rsc';
import { z } from 'zod';

export async function generateUI(prompt: string) {
  const result = await streamUI({
    model: openai('gpt-4o'),
    prompt,
    tools: {
      showWeather: {
        description: 'Show weather widget',
        parameters: z.object({
          location: z.string(),
          temperature: z.number(),
        }),
        generate: async function* ({ location, temperature }) {
          yield <div className="animate-pulse">Loading weather...</div>;

          // Simulate API call
          await new Promise((resolve) => setTimeout(resolve, 1000));

          return (
            <div className="p-4 bg-gradient-to-r from-blue-400 to-blue-600 text-white rounded-lg">
              <h3 className="text-xl font-bold">{location}</h3>
              <p className="text-3xl">{temperature}°F</p>
            </div>
          );
        },
      },
      showStockChart: {
        description: 'Show stock price chart',
        parameters: z.object({
          symbol: z.string(),
          price: z.number(),
        }),
        generate: async function* ({ symbol, price }) {
          yield <div>Loading {symbol} data...</div>;

          return (
            <div className="p-4 border rounded-lg">
              <h3 className="font-bold">{symbol}</h3>
              <p className="text-2xl text-green-600">${price}</p>
            </div>
          );
        },
      },
    },
  });

  return result.value;
}
```

Client component consuming streamUI:

```tsx
'use client';

import { useState } from 'react';
import { generateUI } from './actions';

export function GenerativeUIDemo() {
  const [ui, setUI] = useState<React.ReactNode>(null);
  const [prompt, setPrompt] = useState('');
  const [isLoading, setIsLoading] = useState(false);

  async function handleSubmit(e: React.FormEvent) {
    e.preventDefault();
    setIsLoading(true);

    const result = await generateUI(prompt);
    setUI(result);
    setIsLoading(false);
  }

  return (
    <div className="space-y-4">
      <form onSubmit={handleSubmit} className="flex gap-2">
        <input
          value={prompt}
          onChange={(e) => setPrompt(e.target.value)}
          placeholder="Ask about weather, stocks..."
          className="flex-1 p-2 border rounded"
        />
        <button
          type="submit"
          disabled={isLoading}
          className="px-4 py-2 bg-purple-500 text-white rounded"
        >
          Generate
        </button>
      </form>

      <div className="min-h-[200px] p-4 border rounded">
        {ui || <p className="text-gray-400">Generated UI will appear here</p>}
      </div>
    </div>
  );
}
```

Workflow

  • Choose AI pattern: Determine if use case needs chat (useChat), completion (useCompletion), or generative UI (streamUI)

  • Create API route: Set up server-side streaming endpoint with appropriate model and tools

  • Implement client hook: Use the corresponding React hook with proper configuration

  • Add streaming UI: Display content as it streams with appropriate loading states

  • Handle tool calls: Render tool results with custom components

  • Add error handling: Handle network errors, rate limits, and API failures gracefully

Best Practices

Performance:

  • Use maxSteps to limit tool calling depth

  • Implement proper loading states for better UX

  • Consider debouncing user input for completion features
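
The debouncing point can be sketched with a small generic helper. `debounce` here is a hypothetical utility, not part of the AI SDK; you would wrap whatever triggers the completion request (for example, `complete` from `useCompletion`):

```typescript
// Hypothetical debounce helper: delays `fn` until `delayMs` ms have passed
// without another call, so rapid keystrokes trigger only one request.
export function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  delayMs: number,
): (...args: Args) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Args) => {
    if (timer !== undefined) clearTimeout(timer); // drop the pending call
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Example wiring (sketch, using names from the useCompletion example above):
// const debouncedComplete = debounce((prompt: string) => complete(prompt), 300);
```

A 200-400ms delay is a common starting point; tune it against how responsive the suggestion UI needs to feel.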

Error Handling:

  • Always handle the error state from hooks

  • Provide retry mechanisms for failed requests

  • Show user-friendly error messages
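
For the common chat case, `useChat` returns a `reload()` function that resubmits the last message, which you can wire to a "Retry" button next to the error message. For lower-level requests, a retry mechanism might look like the following sketch (`withRetry` is a hypothetical helper, not an AI SDK API):

```typescript
// Hypothetical retry wrapper with exponential backoff: retries `fn` up to
// `attempts` times, doubling the wait after each failure.
export async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Back off: baseDelayMs, 2x, 4x, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError; // all attempts failed; surface the last error to the UI
}
```

Rate-limit responses (HTTP 429) are the usual candidate for backoff; validation errors should fail immediately rather than retry.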

Accessibility:

  • Announce streaming content to screen readers

  • Provide keyboard navigation for chat interfaces

  • Include proper ARIA labels on interactive elements
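
One way to announce streamed content to screen readers is an `aria-live` region around the message list; a minimal sketch, adapting the useChat example above (the `sr-only` class assumes a Tailwind-style visually-hidden utility):

```tsx
'use client';

import { useChat } from 'ai/react';

export function AccessibleChat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } =
    useChat();

  return (
    <div>
      {/* aria-live="polite" announces updates without interrupting the user;
          aria-busy signals that a response is still streaming. */}
      <div role="log" aria-live="polite" aria-busy={isLoading}>
        {messages.map((message) => (
          <div key={message.id}>
            <strong>{message.role}:</strong> {message.content}
          </div>
        ))}
      </div>
      <form onSubmit={handleSubmit}>
        <label htmlFor="chat-input" className="sr-only">
          Message
        </label>
        <input id="chat-input" value={input} onChange={handleInputChange} />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
```

Note that polite live regions may re-announce on every token during streaming; depending on the screen reader, it can be better to announce only completed messages.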

Security:

  • Validate user input before sending to AI

  • Sanitize AI-generated content before rendering

  • Use rate limiting on API routes
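
The rate-limiting point can be sketched as a minimal in-memory fixed-window counter (`createRateLimiter` is hypothetical; production deployments usually use a shared store such as Redis, since in-memory state does not survive across serverless instances):

```typescript
// Minimal fixed-window rate limiter. `now` is injectable for testing.
export function createRateLimiter(
  limit: number,
  windowMs: number,
  now: () => number = Date.now,
) {
  const windows = new Map<string, { start: number; count: number }>();
  return function allow(key: string): boolean {
    const t = now();
    const w = windows.get(key);
    if (!w || t - w.start >= windowMs) {
      windows.set(key, { start: t, count: 1 }); // start a new window
      return true;
    }
    if (w.count < limit) {
      w.count++;
      return true;
    }
    return false; // over the limit: caller should respond with HTTP 429
  };
}

// Usage in a route handler (sketch):
// const allow = createRateLimiter(10, 60_000); // 10 requests per minute
// if (!allow(ip)) return new Response('Too many requests', { status: 429 });
```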

Additional Resources

Reference Files

For detailed patterns and advanced techniques:

  • references/advanced-patterns.md: Multi-modal AI, conversation memory, custom providers
