Observability
The AI Gateway logs observability metrics related to your requests, which you can use to monitor and debug.
You can view these metrics from the Observability tab of your Vercel dashboard: click AI on the left side of the Observability Overview page.


When you access the AI section of the Observability tab under the team scope, you can view the metrics for all requests made to the AI Gateway across all projects in your team. This is useful for monitoring the overall usage and performance of the AI Gateway.


When you access the AI section of the Observability tab for a specific project, you can view metrics for all requests to the AI Gateway for that project.


You can also access these metrics from the AI tab of your Vercel dashboard under the team scope, where the Activity section shows a recent overview of the requests made to the AI Gateway.


The Requests by Model chart shows the number of requests made to each model over time. This can help you identify which models are being used most frequently and whether there are any spikes in usage.
The Time to First Token chart shows the average time it takes for the AI Gateway to return the first token of a response. This can help you understand the latency of your requests and identify any performance issues.
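To make the metric concrete, here is a minimal sketch of how time to first token can be measured client-side: start a timer when the request is issued and stop it when the first streamed token arrives. The `fake_stream` generator is a hypothetical stand-in for a streamed model response, not part of any Vercel API.

```python
import time

def measure_ttft(stream):
    """Return seconds from request start until the first token arrives."""
    start = time.monotonic()
    for _token in stream:
        # The first token ends the measurement; later tokens don't matter here.
        return time.monotonic() - start
    return None  # the stream produced no tokens

# Hypothetical stand-in for a streamed model response.
def fake_stream(delay_s, tokens):
    time.sleep(delay_s)  # simulated model latency before the first token
    yield from tokens

ttft = measure_ttft(fake_stream(0.05, ["Hello", ",", " world"]))
print(f"time to first token: {ttft:.3f}s")
```

The chart in the dashboard averages this per-request latency over time, so a rising trend points at slower model responses rather than slower token throughput.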
The Input/output Token Counts chart shows the number of input and output tokens for each request. This can help you understand the size of the requests being made and the responses being returned.
The Spend chart shows the total amount spent on AI Gateway requests over time. This can help you monitor your spending and identify any unexpected costs.
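Spend is typically a function of token counts and per-token prices, so the Spend chart can be cross-checked against the Input/output Token Counts chart. The sketch below uses hypothetical prices (`PRICES`, `model-a` are made-up; real per-token rates vary by model and provider).

```python
# Hypothetical per-token prices in USD; real rates vary by model.
PRICES = {"model-a": {"input": 3e-06, "output": 15e-06}}

def request_cost(model, input_tokens, output_tokens):
    """Estimate the cost of one request from its token counts."""
    p = PRICES[model]
    return input_tokens * p["input"] + output_tokens * p["output"]

cost = request_cost("model-a", 1200, 350)
print(f"${cost:.6f}")  # → $0.008850
```

Summing this estimate across requests approximates the totals shown in the Spend chart, which makes unexpected costs easier to trace back to a specific model or traffic spike.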