Prerequisites
Before getting started, make sure you have completed the following steps:
Create an Auth0 Application
Go to your Auth0 Dashboard to create a new Auth0 Application.
- Navigate to Applications > Applications in the left sidebar.
- Click the Create Application button in the top right.
- In the pop-up select Regular Web Applications and click Create.
- Once the Application is created, switch to the Settings tab.
- Scroll down to the Application URIs section.
- Set Allowed Callback URLs to http://localhost:3000/auth/callback.
- Set Allowed Logout URLs to http://localhost:3000.
- Click Save in the bottom right to save your changes.
To learn more about Auth0 applications, read Applications.
Prepare Next.js app
Recommended: To use a starter template, clone the Auth0 AI samples repository:
git clone https://github.com/auth0-samples/auth0-ai-samples.git
cd auth0-ai-samples/authenticate-users/langchain-next-js
Install dependencies
In the root directory of your project, install the following dependencies:
@langchain/langgraph: The core LangGraph module.
@langchain/openai: OpenAI provider for LangChain.
langchain: The core LangChain module.
zod: TypeScript-first schema validation library.
langgraph-nextjs-api-passthrough: API passthrough for LangGraph.
npm install @langchain/langgraph@0.3 @langchain/openai@0.6 langchain@0.3 zod@3 langgraph-nextjs-api-passthrough@0.1
Update the environment file
Copy the .env.example file to .env.local and update the variables with your Auth0 credentials. You can find your Auth0 domain, client ID, and client secret in the application you created in the Auth0 Dashboard.
Pass credentials to the agent
You have to pass the access token from the user’s session to the agent. First, create a helper function to get the access token from the session. Add the following function to src/lib/auth0.ts:
//...
// Get the Access token from Auth0 session
export const getAccessToken = async () => {
const session = await auth0.getSession();
return session?.tokenSet?.accessToken;
};
Now, update the /src/app/api/chat/[..._path]/route.ts file to pass the access token to the agent:
src/app/api/chat/[..._path]/route.ts
import { initApiPassthrough } from "langgraph-nextjs-api-passthrough";
import { getAccessToken } from "@/lib/auth0";
export const { GET, POST, PUT, PATCH, DELETE, OPTIONS, runtime } =
initApiPassthrough({
apiUrl: process.env.LANGGRAPH_API_URL,
baseRoute: "chat/",
bodyParameters: async (req, body) => {
if (
req.nextUrl.pathname.endsWith("/runs/stream") &&
req.method === "POST"
) {
return {
...body,
config: {
configurable: {
_credentials: {
accessToken: await getAccessToken(),
},
},
},
};
}
return body;
},
});
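The body rewrite performed by the bodyParameters hook above can be sketched as a pure helper, which makes it easy to see that existing configurable keys survive the merge. The helper name and types below are illustrative, not part of langgraph-nextjs-api-passthrough:

```typescript
// Merge an Auth0 access token into a LangGraph run payload under
// config.configurable._credentials, mirroring the bodyParameters hook above.
// withCredentials and RunBody are illustrative names, not part of any SDK.
type RunBody = Record<string, unknown> & {
  config?: { configurable?: Record<string, unknown> };
};

function withCredentials(body: RunBody, accessToken?: string): RunBody {
  return {
    ...body,
    config: {
      ...body.config,
      configurable: {
        ...body.config?.configurable,
        _credentials: { accessToken },
      },
    },
  };
}

// Example: pre-existing body fields and configurable keys are preserved.
const patched = withCredentials(
  { input: "hi", config: { configurable: { thread_id: "t1" } } },
  "tok_123"
);
```

Spreading the nested objects rather than overwriting them is what keeps caller-supplied config (such as a thread id) intact alongside the injected credentials.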
In this step, you’ll create a LangChain tool to make the first-party API call. The tool fetches an access token to call the API. In this example, after taking in an Auth0 access token during user login, the tool returns the user profile of the currently logged-in user by calling the /userinfo endpoint.
src/lib/tools/user-info.ts
import { tool } from "@langchain/core/tools";
export const getUserInfoTool = tool(
async (_input, config?) => {
// Access credentials from config
const accessToken = config?.configurable?._credentials?.accessToken;
if (!accessToken) {
return "There is no user logged in.";
}
const response = await fetch(
`https://${process.env.AUTH0_DOMAIN}/userinfo`,
{
headers: {
Authorization: `Bearer ${accessToken}`,
},
}
);
if (response.ok) {
return { result: await response.json() };
}
return "I couldn't verify your identity";
},
{
name: "get_user_info",
description: "Get information about the current logged in user.",
}
);
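On the consuming side, the tool reads the token back out of the run config with optional chaining, so a missing session degrades to the "no user" branch rather than throwing. That lookup can be sketched standalone; readAccessToken and CredentialConfig are illustrative names, not LangChain APIs:

```typescript
// Mirror of the credential lookup inside the tool above: walk the
// RunnableConfig-shaped object and return undefined at any missing level.
interface CredentialConfig {
  configurable?: { _credentials?: { accessToken?: string } };
}

function readAccessToken(config?: CredentialConfig): string | undefined {
  return config?.configurable?._credentials?.accessToken;
}

// Without injected credentials the lookup yields undefined,
// which is what triggers the tool's "There is no user logged in." reply.
const anonymous = readAccessToken({ configurable: {} });
const authed = readAccessToken({
  configurable: { _credentials: { accessToken: "tok_abc" } },
});
```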
The AI agent processes and runs the user’s request through the AI pipeline, including the tool call. Update the /src/lib/agent.ts file to add the tool to the agent.
//...
import { getUserInfoTool } from "./tools/user-info";
//... existing code
const tools = [
//... existing tools
getUserInfoTool,
];
//... existing code
You need an API key from OpenAI or another provider to use an LLM. Add that API key to your .env.local file:
# ...
# You can use any provider of your choice supported by Vercel AI
OPENAI_API_KEY="YOUR_API_KEY"
If you use another provider for your LLM, adjust the variable name in .env.local accordingly.
Test your application
To test the application, run npm run all:dev and navigate to http://localhost:3000. This will open the LangGraph Studio in a new tab; you can close it, as we won’t require it for testing the application.
To interact with the AI agent, you can ask questions like "who am I?" to trigger the tool call and test whether it successfully retrieves information about the logged-in user.
User: who am I?
AI: It seems that there is no user currently logged in. If you need assistance with anything else, feel free to ask!
User: who am I?
AI: You are Deepu Sasidharan. Here are your details: - .........
That’s it! You’ve successfully integrated first-party tool-calling into your project. Explore the example app on GitHub.
Prerequisites
Before getting started, make sure you have completed the following steps:
Create an Auth0 Application
Go to your Auth0 Dashboard to create a new Auth0 Application.
- Navigate to Applications > Applications in the left sidebar.
- Click the Create Application button in the top right.
- In the pop-up select Regular Web Applications and click Create.
- Once the Application is created, switch to the Settings tab.
- Scroll down to the Application URIs section.
- Set Allowed Callback URLs to http://localhost:3000/auth/callback.
- Set Allowed Logout URLs to http://localhost:3000.
- Click Save in the bottom right to save your changes.
To learn more about Auth0 applications, read Applications.
Prepare Next.js app
Recommended: To use a starter template, clone the Auth0 AI samples repository:
git clone https://github.com/auth0-samples/auth0-ai-samples.git
cd auth0-ai-samples/authenticate-users/vercel-ai-next-js
Install dependencies
In the root directory of your project, install the following dependencies:
ai: Core Vercel AI SDK module that interacts with various AI model providers.
@ai-sdk/openai: OpenAI provider for the Vercel AI SDK.
@ai-sdk/react: React UI components for the Vercel AI SDK.
zod: TypeScript-first schema validation library.
npm install ai@4 @ai-sdk/openai@1 @ai-sdk/react@1 zod@3
Update the environment file
Copy the .env.example file to .env.local and update the variables with your Auth0 credentials. You can find your Auth0 domain, client ID, and client secret in the application you created in the Auth0 Dashboard.
In this step, you’ll create a Vercel AI tool to make the first-party API call. The tool fetches an access token to call the API. In this example, after taking in an Auth0 access token during user login, the tool returns the user profile of the currently logged-in user by calling the /userinfo endpoint.
src/lib/tools/user-info.ts
import { tool } from "ai";
import { z } from "zod";
import { auth0 } from "../auth0";
export const getUserInfoTool = tool({
description: "Get information about the current logged in user.",
parameters: z.object({}),
execute: async () => {
const session = await auth0.getSession();
if (!session) {
return "There is no user logged in.";
}
const response = await fetch(
`https://${process.env.AUTH0_DOMAIN}/userinfo`,
{
headers: {
Authorization: `Bearer ${session.tokenSet.accessToken}`,
},
}
);
if (response.ok) {
return { result: await response.json() };
}
return "I couldn't verify your identity";
},
});
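The execute() above has three possible outcomes: no session, a successful /userinfo call, or a failed one. Factoring them into a pure function makes the branches easy to see; the function and type below are an illustration, not part of the Vercel AI SDK:

```typescript
// The three return shapes of the tool's execute() above, as a pure function.
// shapeUserInfo and UserInfoOutcome are illustrative names.
type UserInfoOutcome =
  | { result: unknown }
  | "There is no user logged in."
  | "I couldn't verify your identity";

function shapeUserInfo(
  hasSession: boolean,
  responseOk: boolean,
  json?: unknown
): UserInfoOutcome {
  if (!hasSession) return "There is no user logged in.";
  if (responseOk) return { result: json };
  return "I couldn't verify your identity";
}

// Example: a logged-in user with a successful /userinfo response.
const outcome = shapeUserInfo(true, true, { sub: "auth0|123" });
```

Returning plain strings for the failure branches means the model receives a human-readable explanation it can relay to the user, rather than an exception.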
The AI agent processes and runs the user’s request through the AI pipeline, including the tool call. Vercel AI simplifies this task with the streamText() method. Update the /src/app/api/chat/route.ts file with the following code:
src/app/api/chat/route.ts
//...
import { getUserInfoTool } from "@/lib/tools/user-info";
//... existing code
export async function POST(req: NextRequest) {
const request = await req.json();
const messages = sanitizeMessages(request.messages);
const tools = {
getUserInfoTool,
};
return createDataStreamResponse({
execute: async (dataStream: DataStreamWriter) => {
const result = streamText({
model: openai("gpt-4o-mini"),
system: AGENT_SYSTEM_TEMPLATE,
messages,
maxSteps: 5,
tools,
});
result.mergeIntoDataStream(dataStream, {
sendReasoning: true,
});
},
onError: (err: any) => {
console.log(err);
return `An error occurred! ${err.message}`;
},
});
}
//... existing code
You need an API key from OpenAI or another provider to use an LLM. Add that API key to your .env.local file:
# ...
# You can use any provider of your choice supported by Vercel AI
OPENAI_API_KEY="YOUR_API_KEY"
If you use another provider for your LLM, adjust the variable name in .env.local accordingly.
Test your application
To test the application, run npm run dev and navigate to http://localhost:3000. To interact with the AI agent, you can ask questions like "who am I?" to trigger the tool call and test whether it successfully retrieves information about the logged-in user.
User: who am I?
AI: It seems that there is no user currently logged in. If you need assistance with anything else, feel free to ask!
User: who am I?
AI: You are Deepu Sasidharan. Here are your details: - .........
That’s it! You’ve successfully integrated first-party tool-calling into your project. Explore the example app on GitHub.
Prerequisites
Before getting started, make sure you have completed the following steps:
Create an Auth0 Application
Go to your Auth0 Dashboard to create a new Auth0 Application.
- Navigate to Applications > Applications in the left sidebar.
- Click the Create Application button in the top right.
- In the pop-up select Regular Web Applications and click Create.
- Once the Application is created, switch to the Settings tab.
- Scroll down to the Application URIs section.
- Set Allowed Callback URLs to http://localhost:8000/api/auth/callback.
- Set Allowed Logout URLs to http://localhost:5173.
- Click Save in the bottom right to save your changes.
To learn more about Auth0 applications, read Applications.
Prepare the FastAPI app
Recommended: Use the starter template by cloning the Auth0 AI samples repository:
git clone https://github.com/auth0-samples/auth0-ai-samples.git
cd auth0-ai-samples/authenticate-users/langchain-fastapi-py
The project is divided into two parts:
backend/: contains the backend code for the Web app and API written in Python using FastAPI and the LangGraph agent.
frontend/: contains the frontend code for the Web app written in React as a Vite SPA.
Install dependencies
In the backend directory of your project, install the following dependencies:
langgraph: LangGraph for building stateful, multi-actor applications with LLMs.
langchain-openai: LangChain integrations for OpenAI.
langgraph-cli: LangGraph CLI for running a local LangGraph server.
Make sure you have uv installed, then run the following commands to install the dependencies:
cd backend
uv sync
uv add langgraph langchain-openai "langgraph-cli[inmem]"
Update the environment file
Copy the .env.example file to .env and update the variables with your Auth0 credentials. You can find your Auth0 domain, client ID, and client secret in the application you created in the Auth0 Dashboard.
Pass credentials to the agent
First, you have to pass the access token from the user’s session to the agent. The FastAPI backend will proxy requests to the LangGraph server with the user’s credentials. Update the API route to pass the access token to the agent in app/api/routes/chat.py:
# ...
from app.core.auth import auth_client
# ...
@agent_router.api_route(
"/{full_path:path}", methods=["GET", "POST", "DELETE", "PATCH", "PUT", "OPTIONS"]
)
async def api_route(
request: Request, full_path: str, auth_session=Depends(auth_client.require_session)
):
try:
# ... existing code
# Prepare body
body = await request.body()
if request.method in ("POST", "PUT", "PATCH") and body:
content = await request.json()
content["config"] = {
"configurable": {
"_credentials": {
"access_token": auth_session.get("token_sets")[0].get(
"access_token"
),
}
}
}
body = json.dumps(content).encode("utf-8")
# ... existing code
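The body rewrite in this route can be factored into a small helper, which also shows that pre-existing config keys survive the merge. The function name inject_credentials below is illustrative, not part of FastAPI or LangGraph:

```python
import json


def inject_credentials(body: bytes, access_token: str) -> bytes:
    """Merge an Auth0 access token into a LangGraph run payload under
    config.configurable._credentials, mirroring the proxy route above.
    Illustrative helper, not part of FastAPI or LangGraph."""
    content = json.loads(body) if body else {}
    configurable = content.setdefault("config", {}).setdefault("configurable", {})
    configurable["_credentials"] = {"access_token": access_token}
    return json.dumps(content).encode("utf-8")


# Example: pre-existing body fields and config keys are preserved.
patched = inject_credentials(
    b'{"input": "hi", "config": {"configurable": {"thread_id": "t1"}}}',
    "tok_123",
)
```

Using setdefault at each level means the helper works whether or not the client already sent a config block in the request body.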
In this step, you’ll create a LangChain tool to make the first-party API call. The tool fetches an access token to call the API. In this example, after taking in an Auth0 access token during user login, the tool returns the user profile of the currently logged-in user by calling the /userinfo endpoint. Create a user info tool in app/agents/tools/user_info.py:
app/agents/tools/user_info.py
import httpx
from langchain_core.tools import StructuredTool
from langchain_core.runnables.config import RunnableConfig
from pydantic import BaseModel
from app.core.config import settings
class UserInfoSchema(BaseModel):
pass
async def get_user_info_fn(config: RunnableConfig):
"""Get information about the current logged in user from Auth0 /userinfo endpoint."""
# Access credentials from config
if "configurable" not in config or "_credentials" not in config["configurable"]:
return "There is no user logged in."
credentials = config["configurable"]["_credentials"]
access_token = credentials.get("access_token")
if not access_token:
return "There is no user logged in."
try:
async with httpx.AsyncClient() as client:
response = await client.get(
f"https://{settings.AUTH0_DOMAIN}/userinfo",
headers={
"Authorization": f"Bearer {access_token}",
},
)
if response.status_code == 200:
user_info = response.json()
return f"User information: {user_info}"
else:
return "I couldn't verify your identity"
except Exception as e:
return f"Error getting user info: {str(e)}"
get_user_info = StructuredTool(
name="get_user_info",
description="Get information about the current logged in user.",
args_schema=UserInfoSchema,
coroutine=get_user_info_fn,
)
The AI agent processes and runs the user’s request through the AI pipeline, including the tool call. Update the app/agents/assistant0.py file to add the tool to the agent:
# ...
from app.agents.tools.user_info import get_user_info
tools = [get_user_info]
llm = ChatOpenAI(model="gpt-4.1-mini")
# ... existing code
agent = create_react_agent(
llm,
tools=ToolNode(tools, handle_tool_errors=False),
prompt=get_prompt(),
)
You need an API key from OpenAI to use the LLM. Add that API key to your .env file:
# ...
OPENAI_API_KEY="YOUR_API_KEY"
If you use another provider for your LLM, adjust the variable name in .env accordingly.
Test your application
To test the application, start the FastAPI backend, LangGraph server, and the frontend:
- In a new terminal, start the FastAPI backend:
cd backend
source .venv/bin/activate
fastapi dev app/main.py
- In another terminal, start the LangGraph server:
cd backend
source .venv/bin/activate
uv pip install -U langgraph-api
langgraph dev --port 54367 --allow-blocking
This will open the LangGraph Studio in a new tab; you can close it, as we won’t require it for testing the application.
- In another terminal, start the frontend:
cd frontend
cp .env.example .env # Copy the `.env.example` file to `.env`.
npm install
npm run dev
Visit http://localhost:5173 in your browser and interact with the AI agent. You can ask questions like "who am I?" to trigger the tool call and test whether it successfully retrieves information about the logged-in user.
User: who am I?
AI: It seems that there is no user currently logged in. If you need assistance with anything else, feel free to ask!
User: who am I?
AI: You are Deepu Sasidharan. Here are your details: - .........
That’s it! You’ve successfully integrated first-party tool-calling into your LangGraph FastAPI project. Explore the example app on GitHub.
Prerequisites
Before getting started, make sure you have completed the following steps:
Create an Auth0 Application
Go to your Auth0 Dashboard to create a new Auth0 Application.
- Navigate to Applications > Applications in the left sidebar.
- Click the Create Application button in the top right.
- In the pop-up select Regular Web Applications and click Create.
- Once the Application is created, switch to the Settings tab.
- Scroll down to the Application URIs section.
- Set Allowed Callback URLs to http://localhost:3000/auth/callback.
- Set Allowed Logout URLs to http://localhost:3000.
- Click Save in the bottom right to save your changes.
To learn more about Auth0 applications, read Applications.
Start from our Cloudflare Agents template
Our Auth0 Cloudflare Agents Starter Kit provides a starter project that includes the necessary dependencies and configuration to get you up and running quickly. To create a new Cloudflare Agents project using the template, run the following command in your terminal:
npx create-cloudflare@latest --template auth0-lab/cloudflare-agents-starter
About the dependencies
The starter kit is similar to the Cloudflare Agents starter kit but includes the following dependencies to integrate with Auth0 and Vercel AI:
hono: Hono Web Application framework.
@auth0/auth0-hono: Auth0 SDK for the Hono web framework.
hono-agents: Hono Agents to add intelligent, stateful AI agents to your Hono app.
@auth0/auth0-cloudflare-agents-api: Auth0 Cloudflare Agents API SDK to secure Cloudflare Agents using bearer tokens from Auth0.
@auth0/ai: Auth0 AI SDK to provide base abstractions for authentication and authorization in AI applications.
@auth0/ai-vercel: Auth0 Vercel AI SDK to provide building blocks for using Auth for GenAI with the Vercel AI SDK.
@auth0/ai-cloudflare: Auth0 Cloudflare AI SDK to provide building blocks for using Auth for GenAI with the Cloudflare Agents API.
You don’t need to install these dependencies manually; they are already included in the starter kit. If you were adding them to an existing Cloudflare Agents project, the equivalent commands would be:
npm remove agents
npm install hono \
@auth0/auth0-hono \
hono-agents \
@auth0/auth0-cloudflare-agents-api \
@auth0/ai-cloudflare \
@auth0/ai-vercel \
@auth0/ai
Set up environment variables
In the root directory of your project, copy the .dev.vars.example file to .dev.vars and configure the Auth0 and OpenAI variables.
In this step, you’ll create a Vercel AI tool to make the first-party API call to the Auth0 API. You will do the same for third-party APIs.
After taking in an Auth0 access token during user login, the Cloudflare Worker sends the token to the Cloudflare Agent using the Authorization header in every web request or WebSocket connection. Since the Agent defined in the Chat class in src/agent/chat.ts uses the AuthAgent trait from @auth0/auth0-cloudflare-agents-api, it validates the token’s signature and checks that the token matches the Agent’s audience.
The tool we are defining here uses the same access token to call Auth0’s /userinfo endpoint.
const getUserInfoTool = tool({
description: "Get information about the current logged in user.",
parameters: z.object({}),
execute: async () => {
const { agent } = getCurrentAgent<Chat>();
const tokenSet = agent?.getCredentials();
if (!tokenSet) {
return "There is no user logged in.";
}
const response = await fetch(
`https://${process.env.AUTH0_DOMAIN}/userinfo`,
{
headers: {
Authorization: `Bearer ${tokenSet.access_token}`,
},
}
);
if (response.ok) {
return { result: await response.json() };
}
return "I couldn't verify your identity";
},
});
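As described above, the Worker forwards the user's token as a standard bearer Authorization header on every request or WebSocket connection. A minimal sketch of that header shape is below; in the starter kit the AuthAgent trait handles the token for you, including signature and audience validation, so parseBearer is only an illustration:

```typescript
// Extract the token from an "Authorization: Bearer <token>" header value.
// parseBearer is an illustrative helper, not part of any Auth0 SDK.
function parseBearer(header: string | null): string | undefined {
  const match = header?.match(/^Bearer\s+(\S+)$/i);
  return match?.[1];
}

// Example: a well-formed bearer header yields the raw token;
// a missing header or a different scheme yields undefined.
const token = parseBearer("Bearer eyJhbGciOi.example.token");
```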
Then, in the tools export of the src/agent/chat.ts file, add getUserInfoTool to the tools object:
export const tools = {
// Your other tools...
getUserInfoTool,
};
Test your application
To test the application, run npm run start, navigate to http://localhost:3000/, and interact with the AI agent. You can ask questions like "who am I?" to trigger the tool call and test whether it successfully retrieves information about the logged-in user.
User: who am I?
AI: It seems that there is no user currently logged in. If you need assistance with anything else, feel free to ask!
User: who am I?
AI: You are Juan Martinez. Here are your details: - .........
That’s it! You’ve successfully integrated first-party tool-calling into your project. Explore the starter kit on GitHub.