Building Your First AI Chatbot with A2A Protocol and LangGraph

Author: Nino, Senior Tech Editor

Building a chatbot today often feels like creating an isolated island. Most developers build proprietary APIs that only their own front-end can talk to. However, the future of the AI ecosystem lies in interoperability—where agents can discover and communicate with each other seamlessly. This is where the Agent-to-Agent (A2A) protocol and LangGraph come into play.

In this guide, we will walk through the architectural patterns and implementation details of a chatbot that speaks the A2A protocol. By leveraging n1n.ai for high-speed LLM access, you can ensure your agent is both standards-compliant and performant.

Why A2A and LangGraph?

Before we write a single line of code, it is crucial to understand the 'why.'

  1. A2A Protocol: Think of this as the HTTP of the AI world. It provides a standardized way for agents to describe their capabilities via an 'Agent Card' and exchange messages via a consistent JSON-RPC structure. This prevents vendor lock-in and allows your agent to be part of a larger network of collaborating intelligences.
  2. LangGraph: While LangChain is great for simple chains, LangGraph allows for cyclic, stateful multi-agent orchestration. It treats the conversation as a graph where nodes represent computations and edges represent the flow of state.
  3. The n1n.ai Edge: To power these agents, you need reliable, low-latency access to models like GPT-4o-mini or Claude 3.5 Sonnet. Using n1n.ai as your API aggregator provides a single entry point to the world's best models with superior uptime.

Prerequisites

Ensure your environment meets the following requirements:

  • Python 3.12+: Required for the modern typing syntax and performance improvements used in this tutorial.
  • UV Package Manager: A blazing-fast Python package tool. Install it via: curl -LsSf https://astral.sh/uv/install.sh | sh.
  • API Access: An API key from n1n.ai to access the underlying LLMs.

Core Concepts of the A2A Protocol

The A2A protocol relies on three primary pillars:

1. The Agent Card

Located at /.well-known/agent-card.json, this is the business card for your AI. It defines the agent's name, description, capabilities (like streaming), and specific skills. A skill might be 'General Chat' or 'Data Analysis,' complete with example prompts to help other agents understand how to interact with it.
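
A minimal card might look like the following. The field names follow the A2A specification's Agent Card schema; the concrete values are illustrative and mirror the agent we build in Step 4:

```json
{
  "name": "LangGraph-A2A-Bot",
  "description": "A standardized AI agent example.",
  "url": "http://localhost:9999/",
  "version": "0.1.0",
  "capabilities": { "streaming": true },
  "defaultInputModes": ["text"],
  "defaultOutputModes": ["text"],
  "skills": [
    {
      "id": "general-chat",
      "name": "General Chat",
      "description": "A helpful assistant powered by LangGraph.",
      "tags": ["chat"],
      "examples": ["Tell me about A2A protocol"]
    }
  ]
}
```

Any client (human tooling or another agent) can fetch this document to learn, before sending a single message, what your agent does and how to talk to it.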

2. Message Exchange

A standardized JSON-RPC 2.0 interface for sending and receiving messages. This ensures that whether a human or another agent is talking to your bot, the request/response format remains identical.
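
A request to the agent is therefore an ordinary JSON-RPC 2.0 envelope. A sketch of a `message/send` call (the `parts`/`messageId` structure follows the A2A message schema; the values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": "1",
  "method": "message/send",
  "params": {
    "message": {
      "role": "user",
      "messageId": "msg-001",
      "parts": [{ "kind": "text", "text": "Hello, agent!" }]
    }
  }
}
```

The response comes back in the same envelope, so clients never need bot-specific parsing logic.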

3. Agent Executor

This is the bridge. It translates incoming A2A requests into logic your internal system (LangGraph) can understand, and then queues the response for the protocol layer to deliver.

Step 1: Project Initialization

Start by creating a clean workspace with uv:

mkdir lg-a2a-chatbot
cd lg-a2a-chatbot
uv venv
uv init

Update your pyproject.toml with the necessary dependencies:

[project]
name = "lg-a2a"
version = "0.1.0"
dependencies = [
    "a2a-sdk[http-server]>=0.3",
    "langgraph>=0.2",
    "langchain-core>=0.3",
    "langchain-openai>=0.2",
    "uvicorn[standard]>=0.30",
    "python-dotenv>=1.0.1"
]

Run uv sync to install everything in an isolated environment.

Step 2: Defining the LangGraph Logic

Create agent.py. This file contains the 'brain' of our chatbot. We use a simple state graph with a single node that invokes the LLM.

from typing import Annotated
from langchain_core.messages import BaseMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages
from typing_extensions import TypedDict

class ChatState(TypedDict):
    # add_messages ensures new messages are appended to the history
    messages: Annotated[list[BaseMessage], add_messages]

def model_node(state: ChatState):
    # Pro Tip: point this at n1n.ai (via the base_url= parameter or the
    # OPENAI_BASE_URL env var) for unified access to multiple providers;
    # OPENAI_API_KEY supplies your n1n.ai key.
    model = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)
    response = model.invoke(state["messages"])
    return {"messages": [response]}

def build_agent():
    builder = StateGraph(ChatState)
    builder.add_node("model", model_node)
    builder.add_edge(START, "model")
    builder.add_edge("model", END)
    return builder.compile()

chatbot_agent = build_agent()
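
Conceptually, the add_messages reducer merges each node's returned messages into the existing history rather than overwriting it. A simplified stand-in for illustration (the real add_messages also deduplicates by message ID, which we omit here):

```python
def add_messages_sketch(existing: list, updates: list) -> list:
    # Append the node's new messages to the running history
    # instead of replacing the state key wholesale
    return existing + updates

history = ["HumanMessage('hi')"]
history = add_messages_sketch(history, ["AIMessage('hello!')"])
# history now holds both the user turn and the model reply
```

This append-by-default behavior is why model_node can return only the new response and still leave the full conversation intact in state.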

Step 3: The A2A Agent Executor

Now we need to bridge A2A and LangGraph. Create agent_executor.py. The execute method is the workhorse here.

from a2a.server.agent_execution import AgentExecutor, RequestContext
from a2a.server.events import EventQueue
from a2a.utils import new_agent_text_message
from langchain_core.messages import HumanMessage
from agent import chatbot_agent

class ChatbotAgentExecutor(AgentExecutor):
    async def execute(self, context: RequestContext, event_queue: EventQueue) -> None:
        # Extract text from the A2A protocol context
        user_input = context.get_user_input()

        if not user_input.strip():
            await event_queue.enqueue_event(new_agent_text_message("Input cannot be empty."))
            return

        # Invoke the LangGraph agent
        result = await chatbot_agent.ainvoke({"messages": [HumanMessage(content=user_input)]})

        # Get the final response from the graph state
        reply = result["messages"][-1].content

        # Enqueue back to the A2A system
        await event_queue.enqueue_event(new_agent_text_message(reply))

    async def cancel(self, context: RequestContext, event_queue: EventQueue) -> None:
        raise NotImplementedError("Cancellation not implemented for this tutorial.")

Step 4: Launching the Server

Create server.py to host the agent. This includes the Agent Card configuration.

import uvicorn
from a2a.server.apps import A2AStarletteApplication
from a2a.server.request_handlers import DefaultRequestHandler
from a2a.server.tasks import InMemoryTaskStore
from a2a.types import AgentCapabilities, AgentCard, AgentSkill
from agent_executor import ChatbotAgentExecutor
from dotenv import load_dotenv

load_dotenv()

def main():
    skill = AgentSkill(
        id="general-chat",
        name="General Chat",
        description="A helpful assistant powered by LangGraph.",
        tags=["chat"],
        examples=["Tell me about A2A protocol", "How does LangGraph work?"]
    )

    card = AgentCard(
        name="LangGraph-A2A-Bot",
        description="A standardized AI agent example.",
        url="http://localhost:9999/",
        version="0.1.0",
        default_input_modes=["text"],
        default_output_modes=["text"],
        capabilities=AgentCapabilities(streaming=True),
        skills=[skill]
    )

    handler = DefaultRequestHandler(
        agent_executor=ChatbotAgentExecutor(),
        task_store=InMemoryTaskStore()
    )

    app = A2AStarletteApplication(agent_card=card, http_handler=handler).build()
    uvicorn.run(app, host="0.0.0.0", port=9999)

if __name__ == "__main__":
    main()
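
With the server running, you can smoke-test it from another terminal. The agent-card path comes from the A2A spec; the JSON-RPC body below assumes the `message/send` method and message schema described earlier:

```shell
# Fetch the public Agent Card to confirm discovery works
curl http://localhost:9999/.well-known/agent-card.json

# Send a message via JSON-RPC 2.0
curl -X POST http://localhost:9999/ \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "id": "1", "method": "message/send",
       "params": {"message": {"role": "user", "messageId": "msg-001",
                  "parts": [{"kind": "text", "text": "Hello!"}]}}}'
```

If everything is wired correctly, the second call returns a JSON-RPC response containing the model's reply.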

Pro Tips for Production

  1. Context Management: In this tutorial, we create a new state for every message. In production, use the context_id from the A2A request to retrieve previous conversation history from a database like Redis or PostgreSQL.
  2. Streaming: A2A supports message/stream. You can use LangGraph's .astream() method to push tokens into the event_queue as they arrive, significantly reducing perceived latency.
  3. Model Fallbacks: By using n1n.ai, you can easily switch from GPT-4o-mini to DeepSeek-V3 or Claude 3.5 Sonnet if one provider experiences downtime or if you need higher reasoning capabilities for specific tasks.
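
A minimal sketch of the context-management idea from tip 1, using an in-process dict as a stand-in for Redis or PostgreSQL (the context_id key comes from the A2A request; the helper names here are illustrative):

```python
from collections import defaultdict

# Stand-in for a persistent store keyed by the A2A context_id
conversation_store: dict[str, list] = defaultdict(list)

def load_history(context_id: str) -> list:
    # In production this would be a Redis GET or a SQL SELECT
    return conversation_store[context_id]

def append_turn(context_id: str, user_msg, ai_msg) -> None:
    # Persist both sides of the turn so the next request sees full history
    conversation_store[context_id].extend([user_msg, ai_msg])
```

Inside execute, you would call load_history before invoking the graph and append_turn after, so each A2A conversation thread keeps its own memory.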
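
The streaming pattern from tip 2 can be sketched in isolation. Here an asyncio.Queue stands in for the A2A event_queue, and a stub async generator stands in for the chunks LangGraph's .astream() would yield; only the producer/consumer shape is the point:

```python
import asyncio

async def fake_astream():
    # Stand-in for chatbot_agent.astream(...), which yields chunks
    # as the LLM produces them
    for token in ["Hel", "lo ", "world"]:
        await asyncio.sleep(0)  # simulate latency between chunks
        yield token

async def stream_to_queue(queue: asyncio.Queue) -> None:
    # Push each token the moment it arrives instead of
    # waiting for the complete reply
    async for token in fake_astream():
        await queue.put(token)
    await queue.put(None)  # sentinel: stream finished

async def main() -> str:
    queue: asyncio.Queue = asyncio.Queue()
    producer = asyncio.create_task(stream_to_queue(queue))
    received = []
    while (token := await queue.get()) is not None:
        received.append(token)
    await producer
    return "".join(received)
```

In the real executor, the consumer side is the A2A protocol layer delivering each event to the client, which is what makes the response feel instantaneous.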

Conclusion

You have now built a chatbot that is not just another isolated API, but a citizen of the emerging Agent-to-Agent ecosystem. By combining the structured logic of LangGraph with the interoperability of the A2A protocol, and the reliable infrastructure of n1n.ai, you are ready to scale.

Get a free API key at n1n.ai