If you've tried to build an authenticated app for ChatGPT, you've probably discovered that it's not straightforward. The MCP (Model Context Protocol) OAuth specification introduces concepts that differ significantly from traditional OAuth implementations.
In this guide, we'll break down exactly how MCP OAuth works, why it's designed this way, and how you can implement it without the headaches.
The Authentication Landscape for AI Apps
When ChatGPT connects to external tools, there are actually two separate authentication concerns:
┌─────────────────┐         ┌─────────────────┐         ┌─────────────────┐
│     ChatGPT     │────────>│   MCP Server    │────────>│  Upstream API   │
│   (AI Client)   │         │(Resource Server)│         │ (Figma, Stripe) │
└─────────────────┘         └─────────────────┘         └─────────────────┘
         │                           │                           │
     MCP OAuth                   Validates                   API Keys
    (RFC 9728)                    Tokens                  Bearer Tokens
- MCP Client Authentication: How ChatGPT authenticates to your MCP server
- Upstream API Authentication: How your MCP server authenticates to the APIs it wraps
These are completely independent systems, and conflating them is the source of most confusion.
Why MCP OAuth Is Different
Traditional OAuth has your server acting as an OAuth Client: you redirect users to Google or GitHub, receive an authorization code, exchange it for tokens, and store those tokens.
MCP OAuth flips this entirely. Your MCP server is an OAuth Resource Server (RFC 9728). It doesn't manage OAuth sessions at all. Instead:
- Your MCP server tells clients "go to this Authorization Server for tokens"
- ChatGPT handles the entire OAuth dance with that Authorization Server
- ChatGPT sends the resulting token to your MCP server
- Your server validates the token and executes tools
This design has profound implications:
- No token storage needed: Your server never stores OAuth tokens
- No refresh logic: ChatGPT handles token refresh
- Simpler server code: Just validate incoming tokens
- Better security: Tokens flow through dedicated OAuth infrastructure
The MCP OAuth Flow in Detail
Let's walk through exactly what happens when a user connects their ChatGPT to your MCP server:
Step 1: Discovery
ChatGPT first fetches your server's Protected Resource Metadata:
GET /.well-known/oauth-protected-resource HTTP/1.1
Host: your-mcp-server.com
Your server responds with:
{
  "resource": "https://your-mcp-server.com",
  "authorization_servers": ["https://auth.example.com"],
  "scopes_supported": ["tools:read", "tools:execute"],
  "bearer_methods_supported": ["header"]
}
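On the server side, this endpoint is only a few lines. A framework-free sketch (the resource and issuer URLs are placeholders for your own deployment; per RFC 9728, authorization_servers lists issuer identifier URLs):

```typescript
import { createServer } from 'node:http';

// Static Protected Resource Metadata; placeholder URLs for your deployment.
const metadata = {
  resource: 'https://your-mcp-server.com',
  authorization_servers: ['https://auth.example.com'],
  scopes_supported: ['tools:read', 'tools:execute'],
  bearer_methods_supported: ['header'],
};

const server = createServer((req, res) => {
  if (req.url === '/.well-known/oauth-protected-resource') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(metadata));
  } else {
    res.writeHead(404);
    res.end();
  }
});

// server.listen(8080); // run behind TLS in production
```

Because the metadata is static, it can just as easily be served from a CDN or reverse proxy.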
Step 2: Authorization Server Discovery
ChatGPT then fetches the Authorization Server's metadata:
GET /.well-known/oauth-authorization-server HTTP/1.1
Host: auth.example.com
This returns standard OAuth metadata including authorization and token endpoints.
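An abbreviated example of what that response might contain (endpoint paths vary by provider; the fields shown are the ones the rest of this flow relies on):

```json
{
  "issuer": "https://auth.example.com",
  "authorization_endpoint": "https://auth.example.com/authorize",
  "token_endpoint": "https://auth.example.com/token",
  "registration_endpoint": "https://auth.example.com/register",
  "jwks_uri": "https://auth.example.com/.well-known/jwks.json",
  "response_types_supported": ["code"],
  "grant_types_supported": ["authorization_code", "refresh_token"],
  "code_challenge_methods_supported": ["S256"]
}
```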
Step 3: Client Registration
Many MCP clients use Dynamic Client Registration (DCR), and ChatGPT is the most important example. Other clients may prefer a preregistered public client or a client ID metadata document (CIMD). The protected-resource model stays the same; the client identity mechanism varies.
For a DCR-first client, the flow looks like this:
POST /register HTTP/1.1
Host: auth.example.com
Content-Type: application/json
{
"client_name": "ChatGPT",
"redirect_uris": ["https://chat.openai.com/callback"],
"grant_types": ["authorization_code", "refresh_token"],
"response_types": ["code"],
"token_endpoint_auth_method": "client_secret_basic"
}
The Authorization Server responds with credentials:
{
"client_id": "generated-client-id",
"client_secret": "generated-client-secret",
"client_id_issued_at": 1705012345
}
Step 4: User Authorization
Now ChatGPT redirects the user to your Authorization Server's authorize endpoint. The user logs in, grants permissions, and is redirected back to ChatGPT with an authorization code.
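The redirect that kicks this off might look like the following (values are illustrative placeholders; MCP clients use PKCE, and the resource parameter ties the requested token to your MCP server's audience):

```
GET /authorize?response_type=code
    &client_id=generated-client-id
    &redirect_uri=https%3A%2F%2Fchat.openai.com%2Fcallback
    &scope=tools%3Aread+tools%3Aexecute
    &state=<random-state>
    &code_challenge=<pkce-challenge>
    &code_challenge_method=S256
    &resource=https%3A%2F%2Fyour-mcp-server.com HTTP/1.1
Host: auth.example.com
```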
Step 5: Token Exchange
ChatGPT exchanges the code for tokens:
POST /token HTTP/1.1
Host: auth.example.com
Content-Type: application/x-www-form-urlencoded
Authorization: Basic <base64(client_id:client_secret)>
grant_type=authorization_code&code=...&redirect_uri=...
Step 6: Tool Execution
Finally, ChatGPT calls your MCP server with the Bearer token:
POST /mcp HTTP/1.1
Host: your-mcp-server.com
Authorization: Bearer eyJhbGciOiJSUzI1NiIs...
Content-Type: application/json
{
"jsonrpc": "2.0",
"method": "tools/call",
"params": { "name": "get_user", "arguments": {} }
}
Your server validates the token and executes the tool.
What Your Server Actually Needs to Do
Given this flow, your MCP server's responsibilities are surprisingly minimal:
- Serve Protected Resource Metadata at /.well-known/oauth-protected-resource
- Validate incoming Bearer tokens (verify signature, check expiry, validate claims)
- Return proper 401 responses with WWW-Authenticate headers when auth fails
That's it. No OAuth client code, no token storage, no refresh logic.
The Hidden Complexity
While the protocol is well-designed, implementing it correctly involves nuances:
Token Validation
You need to validate JWTs properly. Hand-rolling the checks is easy to get wrong (for example, aud may be a string or an array), so the sketch below leans on the jose library:
import { createRemoteJWKSet, jwtVerify } from 'jose';

// Cached JWKS resolver; jose refetches keys when it sees an unknown key ID
const jwks = createRemoteJWKSet(
  new URL(authServerUrl + '/.well-known/jwks.json')
);

const validateToken = async (token: string) => {
  // Verifies the signature against the JWKS, then checks expiry,
  // issuer, and audience (which must be your resource URL) in one call
  const { payload } = await jwtVerify(token, jwks, {
    issuer: authServerUrl,
    audience: resourceUrl,
  });
  return payload;
};
WWW-Authenticate Headers
When authentication fails, you must return proper headers:
return new Response(JSON.stringify({
error: 'unauthorized',
error_description: 'Bearer token required'
}), {
status: 401,
headers: {
'WWW-Authenticate': `Bearer resource_metadata="${resourceMetadataUrl}"`
}
});
Scope Enforcement
Different tools might require different scopes:
const requireScope = (token: DecodedToken, required: string) => {
const scopes = token.scope?.split(' ') || [];
if (!scopes.includes(required)) {
throw new Error(`Missing scope: ${required}`);
}
};
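A sketch of wiring this into tool dispatch; the tool-to-scope mapping below is illustrative, not something the spec defines:

```typescript
interface DecodedToken {
  scope?: string;
}

// Throws if the token's space-separated scope list lacks the required scope.
const requireScope = (token: DecodedToken, required: string) => {
  const scopes = token.scope?.split(' ') ?? [];
  if (!scopes.includes(required)) {
    throw new Error(`Missing scope: ${required}`);
  }
};

// Illustrative map from tool name to the scope it needs.
const toolScopes: Record<string, string> = {
  get_user: 'tools:read',
  run_report: 'tools:execute',
};

// Call before executing any tool; tools absent from the map need no extra scope.
function authorizeToolCall(token: DecodedToken, toolName: string): void {
  const required = toolScopes[toolName];
  if (required) requireScope(token, required);
}
```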
Authorization Server Requirements
Your Authorization Server must support:
- OAuth 2.0 Authorization Code Flow with PKCE
- OAuth 2.0 Authorization Server Metadata (RFC 8414)
- JSON Web Keys for token validation
For best cross-client compatibility, it should also expose:
- Dynamic Client Registration (RFC 7591) for DCR-first clients
- Client ID Metadata Document support for CIMD-capable clients
- Resource Indicators so clients can request the exact MCP resource audience
Many OAuth providers support this out of the box:
| Provider | DCR Support | Notes |
|---|---|---|
| Auth0 | Yes | Enable in application settings |
| Okta | Yes | Configure in Authorization Server |
| Keycloak | Yes | Built-in support |
| AWS Cognito | Partial | Requires Lambda for DCR |
|  | No | Not suitable for MCP OAuth |
How Emcy Simplifies This
Building this by hand is error-prone, especially once you need to support more than one client registration strategy. Emcy handles the standards-first path across preregistered clients, CIMD, and DCR.
The canonical reference is the Todo sample app, which validates:
- local preregistered Emcy agent auth
- public Emcy agent auth with CIMD
- public Emcy agent auth with DCR fallback
- VS Code against the same public Todo deployment
- embedded first-party getToken
Emcy generates MCP servers that handle the protected-resource side for you:
npx @emcy/openapi-to-mcp generate \
--url https://api.figma.com/openapi.json \
--auth oauth2 \
--oauth-server https://auth.example.com
The generated server includes:
- Protected Resource Metadata endpoint
- Token validation middleware
- Proper WWW-Authenticate headers
- Scope enforcement per tool
- Audience validation using MCP_RESOURCE_URL
- Environment variable configuration for forwarding the user token upstream
On the client side, Emcy discovers auth metadata, chooses preregistered/CIMD/DCR as needed, sends the resource parameter, and caches tokens by auth-server/resource/callback/mode.
Separating MCP Auth from Upstream Auth
Remember, MCP OAuth is only for authenticating ChatGPT to your MCP server. If your API (Figma, Stripe, etc.) also requires authentication, that's handled separately via environment variables:
# MCP OAuth - for ChatGPT authentication
OAUTH_AUTHORIZATION_SERVER=https://auth.example.com
MCP_RESOURCE_URL=https://your-mcp-server.com
# Upstream API Auth - for calling the actual API
UPSTREAM_API_KEY=your-figma-api-key
These are independent concerns. You might have:
- MCP OAuth for user authentication to your server
- API Key for accessing a third-party API
- No auth for a public API
Emcy's wizard lets you configure both separately.
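As a sketch, in the API-key case the upstream request is built purely from configuration; the incoming MCP bearer token is never forwarded (this assumes a bearer-style upstream key, which varies by API — Figma, for instance, uses its own header scheme):

```typescript
// Build headers for the upstream API call from environment config.
// The MCP token that authorized the tool call plays no part here.
function upstreamHeaders(
  env: Record<string, string | undefined>
): Record<string, string> {
  const key = env.UPSTREAM_API_KEY;
  if (!key) throw new Error('UPSTREAM_API_KEY not configured');
  return {
    Authorization: `Bearer ${key}`,
    'Content-Type': 'application/json',
  };
}
```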
Security Best Practices
- Always validate tokens server-side: Never trust client-provided claims
- Use short token lifetimes: Let ChatGPT handle refresh
- Validate audience claims: Ensure tokens are meant for your server
- Log authentication failures: Monitor for attacks
- Use HTTPS everywhere: Tokens must be transmitted securely
Conclusion
MCP OAuth works best when you treat your MCP server as a protected resource, publish the right metadata, and keep client registration strategy flexible. That is the key distinction from traditional app-owned OAuth.
With Emcy, you can generate the protected-resource implementation, validate it against the Todo sample app, and then reuse the same deployment across Emcy agent, VS Code, and DCR-first clients like ChatGPT.
Ready to build your authenticated MCP server? Start with the Emcy wizard and have a ChatGPT-compatible server running in minutes.