1. Quick Start

LangGraph provides the infrastructure for building stateful, multi-actor applications that leverage large language models. Unlike simple prompt-response chains, LangGraph applications maintain state across interactions, support complex branching logic, and provide durability through persistence mechanisms. This chapter establishes the foundational workflow: installing the library, constructing a basic graph structure, defining processing nodes, executing the workflow, and visualizing the resulting architecture.

Environment Setup

Before constructing graphs, we need to install LangGraph and verify the environment. LangGraph supports both Python and JavaScript/TypeScript ecosystems, with nearly identical conceptual APIs across languages.

For Python environments, installation uses standard package managers. The base langgraph package contains the core graph execution engine and state management primitives. If we plan to integrate with language models through LangChain (the recommended approach for this tutorial), we should also install langchain and provider-specific packages such as langchain-openai or langchain-anthropic.

pip install -U langgraph
pip install -U langchain langchain-openai

For JavaScript or TypeScript projects, the core library is distributed as @langchain/langgraph. We install it alongside @langchain/core for message types and the specific LLM client library we intend to use.

npm install @langchain/langgraph @langchain/core @langchain/openai

After installation, we should configure API keys for our chosen LLM provider. LangGraph itself does not require an API key, but any non-trivial graph will invoke language models that do. We typically store these in environment variables or a .env file, loading them before graph execution.
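A minimal sketch of this setup using only the Python standard library; the key value shown is a placeholder, and in practice you would export the variable in your shell or load it from a .env file with a helper such as python-dotenv:

```python
import os

# Use the key from the environment if present; otherwise fall back to a
# placeholder so the rest of the script can run (never commit real keys)
if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = "sk-placeholder"

api_key = os.environ["OPENAI_API_KEY"]
```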

Building Your First StateGraph

The fundamental abstraction in LangGraph is the StateGraph. This class represents a computational graph where nodes perform work and edges determine control flow. Unlike dataflow pipelines that simply transform data, StateGraph maintains a shared state object that evolves as execution progresses through the graph.

To begin, we import the necessary components. In Python, we use StateGraph from langgraph.graph, along with START and END constants that represent special entry and exit nodes. We also need to define our state schema, which determines what data the graph tracks across steps.

from langgraph.graph import StateGraph, START, END
from typing_extensions import TypedDict

class State(TypedDict):
    message: str
    count: int

builder = StateGraph(State)

In this example, State defines two fields: a message string and an integer count. The StateGraph is parameterized with this schema, ensuring type safety throughout the graph construction. The schema functions as both the input and output contract for the graph, though LangGraph allows separating these concerns for advanced use cases.

The graph construction follows a builder pattern. We instantiate StateGraph with our state type, then incrementally add nodes and edges. Nodes represent discrete computation units, while edges define the transitions between them. The graph remains mutable during construction but becomes immutable after compilation.

Defining a Chatbot Processing Node

Nodes in LangGraph are functions that read the current state, perform computation, and return state updates. A node can be any callable—synchronous or asynchronous—that accepts the state as its primary argument. When a node executes, it receives the current state snapshot and returns a dictionary containing updates to specific state keys.

Let us define a simple chatbot node that appends a response to our message field. This node simulates an AI response by transforming the input message.

def chatbot_node(state: State):
    # Read current state
    current_message = state["message"]
    
    # Generate response (simulated here, typically an LLM call)
    response = f"Received: {current_message}"
    
    # Return state update
    return {"message": response, "count": state["count"] + 1}

Key observations about this pattern: First, nodes do not mutate the state object directly. Instead, they return a dictionary describing how to update the state. LangGraph applies these updates using reducer functions—by default, the last value wins for each key, though we can configure custom reducers for complex behaviors like appending to lists.

Second, nodes only need to return the keys they modify. If a node does not return a key, that portion of the state remains unchanged. This partial update mechanism allows different nodes to manage different aspects of the application state without interfering with each other.
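A quick sketch of this merge behavior in plain Python (no LangGraph required), assuming the default last-value-wins reducer:

```python
# A node that returns only the key it changes; "message" is left untouched
def bump_count(state):
    return {"count": state["count"] + 1}

state = {"message": "hello", "count": 0}
# Default reducer behavior: merge the partial update, last value wins per key
state = {**state, **bump_count(state)}
```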

Third, while this example uses a simple dictionary for state updates, nodes can return Command objects that combine state updates with routing decisions, a technique we will explore in advanced chapters. For now, we focus on basic state transitions.

We add this node to our builder using the add_node method. The first argument is a string identifier used to reference the node when defining edges, and the second is the callable function. We also connect the node to the graph's entry and exit points with add_edge; without an edge from START, compilation fails because the graph has no entrypoint.

builder.add_node("chatbot", chatbot_node)
builder.add_edge(START, "chatbot")
builder.add_edge("chatbot", END)

The node name serves as the address for routing. While we can let LangGraph infer names from function objects, explicit naming prevents refactoring issues and improves graph readability.

Compiling and Executing the Graph

Raw graph definitions lack the runtime infrastructure needed for execution. Compilation transforms the builder configuration into an executable Pregel instance—LangGraph's runtime engine inspired by Google's Pregel graph processing framework. This step performs structural validation, ensuring no orphaned nodes exist and that all routing paths resolve to valid targets.

Compilation also binds runtime configuration such as checkpointers for persistence, though we defer detailed persistence configuration to later chapters. For our first execution, we compile with default settings.

graph = builder.compile()

The compiled graph exposes several execution methods. The most straightforward is invoke, which runs the graph synchronously from start to finish, returning the final state. We must provide initial state values for at least the required keys, and we can pass a configuration object containing runtime parameters like thread_id.

initial_state = {"message": "Hello, LangGraph!", "count": 0}
result = graph.invoke(initial_state, config={"configurable": {"thread_id": "session-1"}})
print(result)

Execution proceeds through supersteps—discrete iterations where all active nodes run in parallel, their updates batched and applied atomically. In our simple linear graph, this means the chatbot node executes once, updates the state, and the graph terminates.
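To make the superstep semantics concrete, here is a plain-Python sketch (the node names are hypothetical) of two parallel nodes reading the same state snapshot before their updates are batched and applied together:

```python
# Two hypothetical parallel nodes; both read the same pre-superstep snapshot
def greeter(state):
    return {"message": f"Received: {state['message']}"}

def counter(state):
    return {"count": state["count"] + 1}

snapshot = {"message": "Hello, LangGraph!", "count": 0}
updates = [greeter(snapshot), counter(snapshot)]  # both run against the snapshot
new_state = dict(snapshot)
for update in updates:  # batched and applied atomically at the end of the step
    new_state.update(update)
```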

The thread_id in the configuration hints at LangGraph's persistence capabilities. Without a checkpointer it has no effect, but once we add one in subsequent chapters, this thread identifier lets LangGraph persist state snapshots per conversation thread and resume after interruptions.

For asynchronous environments, LangGraph provides ainvoke, astream, and other async variants. These are essential for I/O-bound nodes that call external APIs or language models, preventing event loop blocking during graph execution.
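As a sketch of the async pattern (plain asyncio, no LangGraph required): an async node is just a coroutine that awaits its I/O and returns a state update, which a compiled graph would then run through ainvoke:

```python
import asyncio

# An async node: a coroutine that awaits I/O and returns a partial state update
async def chatbot_node(state):
    await asyncio.sleep(0)  # stands in for an awaited LLM or API call
    return {"message": f"Received: {state['message']}", "count": state["count"] + 1}

result = asyncio.run(chatbot_node({"message": "Hello", "count": 0}))
```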

Visualizing Graph Structure

As graphs grow complex, visualizing their structure becomes essential for debugging and team communication. LangGraph provides built-in visualization utilities that generate diagrams from the compiled graph structure.

The primary method generates Mermaid diagram syntax—a text-based format supported by many documentation platforms and version control systems. We can render this syntax using Mermaid.js or export it directly to PNG images.

from IPython.display import Image, display

# Generate Mermaid syntax
mermaid_syntax = graph.get_graph().draw_mermaid()
print(mermaid_syntax)

The Mermaid output represents nodes as boxes and edges as arrows, clearly showing the flow from START through our chatbot node to END. This textual representation is useful for documentation and pull request reviews, where binary images might be cumbersome.
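For the linear graph built above, the generated syntax looks roughly like the following simplified sketch (exact node identifiers and styling directives vary by LangGraph version):

```mermaid
graph TD
    __start__ --> chatbot
    chatbot --> __end__
```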

For immediate visual inspection, particularly in Jupyter notebooks, we can generate PNG images directly. By default, draw_mermaid_png renders the Mermaid syntax through a remote rendering service; local backends such as Pyppeteer are also supported.

display(Image(graph.get_graph().draw_mermaid_png()))

The resulting diagram shows our linear flow: START → chatbot → END. In more complex graphs with conditional edges, parallel branches, or cycles, these diagrams prove invaluable for verifying that the control flow matches our intent. They reveal routing logic that might be obscured in code, making them essential tools for reviewing multi-agent architectures or approval workflows.

Visualization also serves as a validation mechanism. If the diagram shows an unexpected loop or disconnected component, the graph structure likely contains logic errors that compilation did not catch. This visual feedback loop accelerates development, allowing us to iterate on graph topology before investing in node implementation details.

A Complete Example

Let us assemble these concepts into a runnable example that demonstrates the complete lifecycle. We construct a graph that simulates a conversation turn, taking user input, processing it through a mock LLM node, and returning the response.

from langgraph.graph import StateGraph, START, END
from langchain_core.messages import HumanMessage, AIMessage, AnyMessage
from typing_extensions import TypedDict, Annotated
import operator

# Define state with message list using reducer to append
class ChatState(TypedDict):
    messages: Annotated[list[AnyMessage], operator.add]

# Define the processing node
def assistant(state: ChatState):
    # In production, this would call an actual LLM
    last_message = state["messages"][-1].content
    response = f"Processing: {last_message}"
    return {"messages": [AIMessage(content=response)]}

# Build the graph
builder = StateGraph(ChatState)
builder.add_node("assistant", assistant)
builder.add_edge(START, "assistant")
builder.add_edge("assistant", END)

# Compile
graph = builder.compile()

# Execute
inputs = {"messages": [HumanMessage(content="Explain LangGraph")]}
result = graph.invoke(inputs)

This example introduces Annotated with operator.add, which configures a reducer function for the messages list. Rather than replacing the list, each update appends new messages. This pattern is so common for conversational applications that LangGraph provides MessagesState—a prebuilt state class with this configuration—but understanding the underlying mechanism prepares us for custom state designs.
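The reducer itself is just a two-argument function applied to the old and new values. A quick sketch of how operator.add merges list updates (plain strings stand in for message objects here):

```python
import operator

# operator.add on lists is concatenation, so each update appends to the history
existing = ["HumanMessage: Explain LangGraph"]
update = ["AIMessage: Processing: Explain LangGraph"]
merged = operator.add(existing, update)  # equivalent to existing + update
```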

When we invoke this graph, the execution engine initializes the state with our input, routes to the assistant node, applies the returned update (appending the AIMessage), and terminates at END. The final state contains both the original human message and the generated AI response.

Summary

We have established the foundational LangGraph workflow: defining state schemas, constructing graphs with nodes and edges, compiling for execution, and visualizing the resulting structure. These primitives—StateGraph, nodes as state-transforming functions, and the compile-then-execute pattern—underpin all subsequent LangGraph development.

The examples in this chapter used simple state updates and linear flows. In the next chapter, we explore state management in depth: designing robust TypedDict schemas, configuring reducer functions for complex state transformations, leveraging prebuilt MessagesState for conversational applications, and separating input from output schemas to create clean architectural boundaries.