Migrating from API keys to OAuth

Why OAuth is now critical for AI workflows

Are you building anything AI-related? Like agents that call APIs, tools that chain workflows together, or systems where services talk to each other autonomously?

Then secure service-to-service auth isn't optional anymore. OAuth has become table stakes for AI workflows because:

  • Agents need access tokens to call APIs securely
  • You often have tools or services acting on behalf of each other
  • Expiring, scoped tokens are safer than static API keys

In this post, I’ll talk about how you can migrate from API keys to OAuth. I’ll include design tradeoffs, lessons learned, and sample code.

Let’s dive in.


Why OAuth? Isn't that overkill?

OAuth is usually associated with user login flows. But it also has a powerful use case for machine-to-machine (M2M) authorization, especially in environments with multiple independently deployed services.

Here’s why OAuth is a better option:

  • Token expiry + rotation out of the box
  • Standard scopes + audience checks
  • Easy to plug into identity providers
  • Replaces the need for rolling your own signature/verification logic
  • Cryptographically signed credentials using standard JWTs

Still wondering if it's worth it? If any of the following sounds like you:

  • You're building MCP servers
  • You're working on AI agents
  • You run a multi-tenant backend
  • You expose management APIs externally
  • Your services talk to each other across trust boundaries

…then it’s probably time to say goodbye to long-lived API keys.


Architecture: from shared secrets to trust delegation

Here’s the before and after.

Before (API keys):

Each service had a shared secret, used roughly as sketched after this list:

  • Validate incoming requests by matching an Authorization: Token xyz123 header
  • Maintain a lookup table of keys → permissions
  • Risky if any key is leaked (no TTL, no revocation)
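
For reference, here's a minimal sketch of what that legacy check looked like. This is Express-style middleware with hypothetical names (apiKeys, legacyAuth), not our exact code:

const apiKeys = {
  // long-lived keys mapped to permissions -- no TTL, no rotation
  'xyz123': { service: 'service-foo', permissions: ['read', 'write'] }
};

function legacyAuth(req, res, next) {
  const header = req.headers['authorization'] || '';
  const key = header.replace('Token ', '');
  const entry = apiKeys[key];
  if (!entry) return res.status(401).json({ error: 'invalid API key' });
  // revoking a key means editing this table and redeploying
  req.permissions = entry.permissions;
  next();
}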

After (OAuth 2.0):

We use the client credentials grant:

  • Every service is issued a client_id and client_secret
  • They exchange those for an access_token
  • Tokens have expiry, scopes, and audience claims
  • The receiving service verifies the token against the issuer's public key (published via JWKS)

Concretely, that's a machine-to-machine configuration with JWT access tokens signed using RS256. Receiving services verify tokens against the issuer's JWKS endpoint, with the signing keys cached locally.


OAuth ensures that:

  • Agents can request scoped access tokens before calling services
  • Requests are authenticated and time-bound
  • Scopes define exactly what each agent is allowed to do
  • Audiences prevent token reuse across unauthorized systems (see the decoded token below)
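
To make that concrete, here's roughly what the payload of such an access token looks like once decoded (the claim values, including the scope names, are illustrative):

{
  "iss": "https://auth.yourdomain.com/",
  "sub": "service-foo@clients",
  "aud": "service-bar",
  "iat": 1735689600,
  "exp": 1735776000,
  "scope": "read:orders write:orders"
}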

MCP workflows often involve chaining tools and routing requests through a graph of services. OAuth ensures that each hop is secure, and that every agent operates under tight, auditable permissions.


Issuing tokens: service-to-service in practice

Here’s how a service requests a token:

curl --request POST \
  --url https://auth.yourdomain.com/oauth/token \
  --header 'content-type: application/json' \
  --data '{
    "client_id": "service-foo",
    "client_secret": "super-secret",
    "audience": "service-bar",
    "grant_type": "client_credentials"
}'

You’ll get a JWT back:

{
  "access_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...",
  "expires_in": 86400,
  "token_type": "Bearer"
}

Attach it to your requests like so:

Authorization: Bearer <access_token>
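
In code, that might look like this (a sketch using Node 18+'s built-in fetch; the URL and callServiceBar are placeholders):

async function callServiceBar(accessToken) {
  const res = await fetch('https://service-bar.yourdomain.com/v1/orders', {
    headers: { Authorization: `Bearer ${accessToken}` }
  });
  if (res.status === 401) {
    // token expired or was rejected -- request a fresh one and retry
    throw new Error('unauthorized');
  }
  return res.json();
}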


Verifying tokens: the real work begins

Token verification happens on every request. We built a lightweight middleware in Node.js to:

  • Decode the token
  • Fetch public keys from the JWKS endpoint
  • Validate signature + aud, exp, and scope claims

Here’s a simplified snippet:

const jwt = require('jsonwebtoken');
const jwksClient = require('jwks-rsa');

const client = jwksClient({
  jwksUri: 'https://auth.yourdomain.com/.well-known/jwks.json',
  cache: true,                       // cache signing keys locally
  cacheMaxAge: 12 * 60 * 60 * 1000   // refresh them every 12 hours
});

// Look up the public key matching the `kid` in the token header
function getKey(header, callback) {
  client.getSigningKey(header.kid, (err, key) => {
    if (err) return callback(err);
    callback(null, key.getPublicKey());
  });
}

function verifyToken(token) {
  return new Promise((resolve, reject) => {
    jwt.verify(token, getKey, {
      audience: 'service-bar',                 // reject tokens minted for other services
      issuer: 'https://auth.yourdomain.com/',  // reject tokens from other issuers
      algorithms: ['RS256']                    // pin the algorithm; never accept `none`
    }, (err, decoded) => {
      if (err) return reject(err);
      resolve(decoded);
    });
  });
}
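
To put verifyToken to work, here's one way to wire it into request handling, with an explicit scope check on top of the signature, issuer, and audience validation. This is an Express-style sketch, and requireScope is a hypothetical helper:

function requireScope(requiredScope) {
  return async (req, res, next) => {
    const header = req.headers['authorization'] || '';
    const token = header.replace('Bearer ', '');
    try {
      const claims = await verifyToken(token);
      // `scope` is a space-delimited string in the token payload
      const scopes = (claims.scope || '').split(' ');
      if (!scopes.includes(requiredScope)) {
        return res.status(403).json({ error: 'insufficient scope' });
      }
      req.auth = claims;
      next();
    } catch (err) {
      res.status(401).json({ error: 'invalid or expired token' });
    }
  };
}

// Usage: app.get('/orders', requireScope('read:orders'), handler);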

Lessons learned the hard way

  • Scopes matter: Design them intentionally. Don't just use read:all.
  • Token TTL: We use 24-hour tokens for internal services, but keep room to lower this as our security posture tightens.
  • JWKS caching: Don't hit the endpoint on every request. Cache keys for 12–24 hours (the cache options in the verification snippet above do this).
  • Rate limits: Token issuance endpoints may throttle you, so cache tokens on the client until they expire (see the sketch after this list).
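
Here's a minimal sketch of that client-side token cache. The endpoint and credentials match the curl example above; the 60-second refresh buffer is a design choice, not a requirement:

let cached = { token: null, expiresAt: 0 };

async function getAccessToken() {
  // reuse the cached token while it has more than 60s of life left
  if (cached.token && Date.now() < cached.expiresAt - 60_000) {
    return cached.token;
  }
  const res = await fetch('https://auth.yourdomain.com/oauth/token', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({
      client_id: 'service-foo',
      client_secret: process.env.CLIENT_SECRET, // never hardcode this
      audience: 'service-bar',
      grant_type: 'client_credentials'
    })
  });
  const { access_token, expires_in } = await res.json();
  cached = { token: access_token, expiresAt: Date.now() + expires_in * 1000 };
  return access_token;
}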

What we gained (and what we gave up)

Gained:

  • Centralized authorization management
  • Secure service-to-service trust model
  • Short-lived, signed credentials
  • Easy revocation (rotate a secret, done)

Gave up:

  • Simplicity of hardcoded API keys
  • Some extra infra plumbing
  • Initial learning curve and refactor cost

That said, the tradeoff was worth it. Security, observability, and future flexibility all got a serious upgrade.


Would I recommend it?

Yes—if you’ve outgrown simple API keys or need to support secure service-to-service authorization.

No—if you’re running 2 services on the same private subnet and don’t plan to scale.


Your turn

Have you migrated from API keys to OAuth before?

  • Did you use a third-party service or roll your own token service?
  • What was the most painful part of the transition?
  • Want a template repo for this kind of setup?

Drop your thoughts below or link to your favorite example 👇
