Prerequisites
Before getting started, make sure you have completed the following steps:
Create an Auth0 Application
Go to your Auth0 Dashboard to create a new Auth0 Application.
- Navigate to Applications > Applications in the left sidebar.
- Click the Create Application button in the top right.
- In the pop-up, select Regular Web Applications and click Create.
- Once the Application is created, switch to the Settings tab.
- Scroll down to the Application URIs section.
- Set Allowed Callback URLs to: http://localhost:3000/auth/callback
- Set Allowed Logout URLs to: http://localhost:3000
- Click Save in the bottom right to save your changes.
To learn more about Auth0 applications, read Applications.
Prepare Next.js app
Recommended: To use a starter template, clone the Auth0 AI samples repository:

```bash
git clone https://github.com/auth0-samples/auth0-ai-samples.git
cd auth0-ai-samples/authenticate-users/langchain-next-js
```
Install dependencies
In the root directory of your project, install the following dependencies:
- @langchain/langgraph: the core LangGraph module.
- @langchain/openai: the OpenAI provider for LangChain.
- langchain: the core LangChain module.
- zod: a TypeScript-first schema validation library.
- langgraph-nextjs-api-passthrough: an API passthrough for LangGraph.

```bash
npm install @langchain/langgraph@0.3 @langchain/openai@0.6 langchain@0.3 zod@3 langgraph-nextjs-api-passthrough@0.1
```
Update the environment file
Copy the .env.example file to .env.local and update the variables with your Auth0 credentials. You can find your Auth0 domain, client ID, and client secret in the application you created in the Auth0 Dashboard.

Pass credentials to the agent

You have to pass the access token from the user's session to the agent. First, create a helper function to get the access token from the session. Add the following function to src/lib/auth0.ts:

```typescript
//...

// Get the access token from the Auth0 session
export const getAccessToken = async () => {
  const session = await auth0.getSession();
  return session?.tokenSet?.accessToken;
};
```
Now, update the /src/app/api/chat/[..._path]/route.ts file to pass the access token to the agent:

```typescript
import { initApiPassthrough } from "langgraph-nextjs-api-passthrough";
import { getAccessToken } from "@/lib/auth0";

export const { GET, POST, PUT, PATCH, DELETE, OPTIONS, runtime } =
  initApiPassthrough({
    apiUrl: process.env.LANGGRAPH_API_URL,
    baseRoute: "chat/",
    bodyParameters: async (req, body) => {
      // Inject the user's access token only for run-streaming requests
      if (
        req.nextUrl.pathname.endsWith("/runs/stream") &&
        req.method === "POST"
      ) {
        return {
          ...body,
          config: {
            configurable: {
              _credentials: {
                accessToken: await getAccessToken(),
              },
            },
          },
        };
      }
      return body;
    },
  });
```
In this step, you’ll create a LangChain tool to make the first-party API call. The tool fetches an access token to call the API. In this example, after taking in an Auth0 access token during user login, the tool returns the profile of the currently logged-in user by calling the /userinfo endpoint.

Create the tool in src/lib/tools/user-info.ts:

```typescript
import { tool } from "@langchain/core/tools";

export const getUserInfoTool = tool(
  async (_input, config?) => {
    // Access credentials from the config injected by the API route
    const accessToken = config?.configurable?._credentials?.accessToken;
    if (!accessToken) {
      return "There is no user logged in.";
    }
    const response = await fetch(
      `https://${process.env.AUTH0_DOMAIN}/userinfo`,
      {
        headers: {
          Authorization: `Bearer ${accessToken}`,
        },
      }
    );
    if (response.ok) {
      return { result: await response.json() };
    }
    return "I couldn't verify your identity";
  },
  {
    name: "get_user_info",
    description: "Get information about the current logged in user.",
  }
);
```
The AI agent processes and runs the user’s request through the AI pipeline, including the tool call. Update the /src/lib/agent.ts file to add the tool to the agent:

```typescript
//...
import { getUserInfoTool } from "./tools/user-info";

//... existing code

const tools = [
  //... existing tools
  getUserInfoTool,
];

//... existing code
```
You need an API key from OpenAI or another provider to use an LLM. Add that API key to your .env.local file:

```bash
# ...
# You can use any model provider of your choice supported by LangChain
OPENAI_API_KEY="YOUR_API_KEY"
```

If you use another provider for your LLM, adjust the variable name in .env.local accordingly.

Test your application
To test the application, run npm run all:dev and navigate to http://localhost:3000. This will open the LangGraph Studio in a new tab; you can close it, as we won't need it for testing the application.

To interact with the AI agent, ask questions like "who am I?" to trigger the tool call and test whether it successfully retrieves information about the logged-in user.

User: who am I?
AI: It seems that there is no user currently logged in. If you need assistance with anything else, feel free to ask!
User: who am I?
AI: You are Deepu Sasidharan. Here are your details: - .........
That’s it! You’ve successfully integrated first-party tool-calling into your project. Explore the example app on GitHub.

Prerequisites
Before getting started, make sure you have completed the following steps:
Create an Auth0 Application
Go to your Auth0 Dashboard to create a new Auth0 Application.
- Navigate to Applications > Applications in the left sidebar.
- Click the Create Application button in the top right.
- In the pop-up, select Regular Web Applications and click Create.
- Once the Application is created, switch to the Settings tab.
- Scroll down to the Application URIs section.
- Set Allowed Callback URLs to: http://localhost:8000/api/auth/callback
- Set Allowed Logout URLs to: http://localhost:5173
- Click Save in the bottom right to save your changes.
To learn more about Auth0 applications, read Applications.
Prepare the FastAPI app
Recommended: Use the starter template by cloning the Auth0 AI samples repository:

```bash
git clone https://github.com/auth0-samples/auth0-ai-samples.git
cd auth0-ai-samples/authenticate-users/langchain-fastapi-py
```
The project is divided into two parts:
- backend/: contains the backend code for the web app and API, written in Python using FastAPI, plus the LangGraph agent.
- frontend/: contains the frontend code for the web app, written in React as a Vite SPA.
Install dependencies
In the backend directory of your project, install the following dependencies:
- langgraph: LangGraph for building stateful, multi-actor applications with LLMs.
- langchain-openai: LangChain integrations for OpenAI.
- langgraph-cli: the LangGraph CLI for running a local LangGraph server.
Make sure you have uv installed, then run the following commands to install the dependencies:

```bash
cd backend
uv sync
uv add langgraph langchain-openai "langgraph-cli[inmem]"
```
Update the environment file
Copy the .env.example file to .env and update the variables with your Auth0 credentials. You can find your Auth0 domain, client ID, and client secret in the application you created in the Auth0 Dashboard.

Pass credentials to the agent

First, you have to pass the access token from the user’s session to the agent. The FastAPI backend will proxy requests to the LangGraph server with the user’s credentials. Update the API route in app/api/routes/chat.py to pass the access token to the agent:

```python
# ...
from app.core.auth import auth_client

# ...

@agent_router.api_route(
    "/{full_path:path}", methods=["GET", "POST", "DELETE", "PATCH", "PUT", "OPTIONS"]
)
async def api_route(
    request: Request, full_path: str, auth_session=Depends(auth_client.require_session)
):
    try:
        # ... existing code

        # Prepare the body, injecting the user's access token into the run config
        body = await request.body()
        if request.method in ("POST", "PUT", "PATCH") and body:
            content = await request.json()
            content["config"] = {
                "configurable": {
                    "_credentials": {
                        "access_token": auth_session.get("token_sets")[0].get(
                            "access_token"
                        ),
                    }
                }
            }
            body = json.dumps(content).encode("utf-8")
        # ... existing code
```
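The body-merging step in the route can be sketched in isolation as a plain function (the helper name `inject_credentials` is illustrative, not part of the sample app). It mirrors how the proxy rewrites the JSON body before forwarding it to the LangGraph server; note that, like the route above, it overwrites any `config` key already present in the body:

```python
import json


def inject_credentials(body: bytes, access_token: str) -> bytes:
    """Merge the user's access token into the request body under
    config.configurable._credentials, as the proxy route does."""
    content = json.loads(body) if body else {}
    content["config"] = {
        "configurable": {"_credentials": {"access_token": access_token}}
    }
    return json.dumps(content).encode("utf-8")


# The original payload is preserved and the credentials are added:
merged = json.loads(inject_credentials(b'{"input": "hi"}', "tok123"))
print(merged["config"]["configurable"]["_credentials"]["access_token"])  # tok123
```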
In this step, you’ll create a LangChain tool to make the first-party API call. The tool fetches an access token to call the API. In this example, after taking in an Auth0 access token during user login, the tool returns the profile of the currently logged-in user by calling the /userinfo endpoint.

Create a user info tool in app/agents/tools/user_info.py:

```python
import httpx
from langchain_core.tools import StructuredTool
from langchain_core.runnables.config import RunnableConfig
from pydantic import BaseModel

from app.core.config import settings


class UserInfoSchema(BaseModel):
    pass


async def get_user_info_fn(config: RunnableConfig):
    """Get information about the currently logged-in user from the Auth0 /userinfo endpoint."""
    # Access credentials from the config injected by the FastAPI proxy
    if "configurable" not in config or "_credentials" not in config["configurable"]:
        return "There is no user logged in."
    credentials = config["configurable"]["_credentials"]
    access_token = credentials.get("access_token")
    if not access_token:
        return "There is no user logged in."
    try:
        async with httpx.AsyncClient() as client:
            response = await client.get(
                f"https://{settings.AUTH0_DOMAIN}/userinfo",
                headers={
                    "Authorization": f"Bearer {access_token}",
                },
            )
            if response.status_code == 200:
                user_info = response.json()
                return f"User information: {user_info}"
            else:
                return "I couldn't verify your identity"
    except Exception as e:
        return f"Error getting user info: {str(e)}"


get_user_info = StructuredTool(
    name="get_user_info",
    description="Get information about the current logged in user.",
    args_schema=UserInfoSchema,
    coroutine=get_user_info_fn,
)
```
The AI agent processes and runs the user’s request through the AI pipeline, including the tool call. Update the app/agents/assistant0.py file to add the tool to the agent:

```python
# ...
from app.agents.tools.user_info import get_user_info

tools = [get_user_info]

llm = ChatOpenAI(model="gpt-4.1-mini")

# ... existing code

agent = create_react_agent(
    llm,
    tools=ToolNode(tools, handle_tool_errors=False),
    prompt=get_prompt(),
)
```
You need an API key from OpenAI to use the LLM. Add that API key to your .env file:

```bash
# ...
OPENAI_API_KEY="YOUR_API_KEY"
```

If you use another provider for your LLM, adjust the variable name in .env accordingly.

Test your application
To test the application, start the FastAPI backend, the LangGraph server, and the frontend:

- In a new terminal, start the FastAPI backend:

```bash
cd backend
source .venv/bin/activate
fastapi dev app/main.py
```

- In another terminal, start the LangGraph server:

```bash
cd backend
source .venv/bin/activate
uv pip install -U langgraph-api
langgraph dev --port 54367 --allow-blocking
```

This will open the LangGraph Studio in a new tab; you can close it, as we won't need it for testing the application.

- In another terminal, start the frontend:

```bash
cd frontend
cp .env.example .env  # Copy the `.env.example` file to `.env`.
npm install
npm run dev
```
Visit http://localhost:5173 in your browser and interact with the AI agent. You can ask questions like "who am I?" to trigger the tool call and test whether it successfully retrieves information about the logged-in user.

User: who am I?
AI: It seems that there is no user currently logged in. If you need assistance with anything else, feel free to ask!
User: who am I?
AI: You are Deepu Sasidharan. Here are your details: - .........
That’s it! You’ve successfully integrated first-party tool-calling into your LangGraph FastAPI project. Explore the example app on GitHub.