
LangChain Hello World

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Copy the following command and send it to your AI assistant to learn this skill:

Install skill "langchain-hello-world" with this command: npx skills add jeremylongshore/claude-code-plugins-plus-skills/jeremylongshore-claude-code-plugins-plus-skills-langchain-hello-world


Overview

Minimal working example demonstrating core LangChain functionality with chains and prompts.

Prerequisites

  • Completed langchain-install-auth setup

  • Valid LLM provider API credentials configured

  • Python 3.9+ or Node.js 18+ environment ready

Instructions

Step 1: Create Entry File

Create a new file `hello_langchain.py` for your hello world example.

Step 2: Import and Initialize

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini")
```

Step 3: Create Your First Chain

```python
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}"),
])

chain = prompt | llm | StrOutputParser()

response = chain.invoke({"input": "Hello, LangChain!"})
print(response)
```

Output

  • Working Python file with LangChain chain

  • Successful LLM response confirming connection

  • Console output showing:

Hello! I'm your LangChain-powered assistant. How can I help you today?

Error Handling

| Error | Cause | Solution |
| --- | --- | --- |
| Import Error | SDK not installed | Run `pip install langchain langchain-openai` |
| Auth Error | Invalid credentials | Verify the provider API key environment variable is set |
| Timeout | Network issues | Increase the timeout or check connectivity |
| Rate Limit | Too many requests | Wait and retry with exponential backoff |
| Model Not Found | Invalid model name | Check available models in the provider docs |

Examples

Simple Chain (Python)

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
chain = prompt | llm | StrOutputParser()

result = chain.invoke({"topic": "programming"})
print(result)
```

With Memory (Python)

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import HumanMessage, AIMessage

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("user", "{input}"),
])

chain = prompt | llm

history = []
response = chain.invoke({"input": "Hi!", "history": history})
print(response.content)
```

TypeScript Example

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const llm = new ChatOpenAI({ modelName: "gpt-4o-mini" });
const prompt = ChatPromptTemplate.fromTemplate("Tell me about {topic}");
const chain = prompt.pipe(llm).pipe(new StringOutputParser());

const result = await chain.invoke({ topic: "LangChain" });
console.log(result);
```

Resources

  • LangChain LCEL Guide

  • Prompt Templates

  • Output Parsers

Next Steps

Proceed to langchain-local-dev-loop for development workflow setup.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
