Cameron Archer for Tinybird

Build a Real-Time Service Health Monitoring API Using Tinybird

Monitoring the health of microservices in real time can be a daunting task, especially when dealing with the large volumes of data generated by many services and endpoints. Tracking metrics like response times, request counts, and error rates is crucial for maintaining system reliability and performance. This tutorial walks you through building a real-time API for monitoring these metrics with Tinybird, a data analytics backend for software developers: you build real-time analytics APIs without setting up or managing the underlying infrastructure, with a local-first development workflow, git-based deployments, resource definitions as code, and features for AI-native developers. Using Tinybird's data sources and pipes, you'll ingest, transform, and expose service health metrics as API endpoints, providing actionable data for real-time monitoring and alerting.

Understanding the data

Imagine your data looks like this:

{"timestamp": "2025-05-12 08:23:24", "service_name": "user-service", "endpoint": "/register", "response_time_ms": 407, "status_code": 500, "error_count": 7, "request_count": 67}

This data represents a single record of a health metric for a microservice, capturing the timestamp, service name, specific endpoint, response time in milliseconds, the HTTP status code, error count, and request count. To store this data, you'll create Tinybird data sources. Here is how you define the schema for the service_health_metrics data source:

DESCRIPTION >
    Datasource for storing service health metrics

SCHEMA >
    `timestamp` DateTime `json:$.timestamp`,
    `service_name` String `json:$.service_name`,
    `endpoint` String `json:$.endpoint`,
    `response_time_ms` UInt32 `json:$.response_time_ms`,
    `status_code` UInt16 `json:$.status_code`,
    `error_count` UInt16 `json:$.error_count`,
    `request_count` UInt16 `json:$.request_count`

ENGINE "MergeTree"
ENGINE_PARTITION_KEY "toYYYYMM(timestamp)"
ENGINE_SORTING_KEY "timestamp, service_name, endpoint"

This schema defines the structure and types of the data being ingested. The sorting key is particularly important as it optimizes the storage and querying of data based on the defined order, improving performance for time-series analyses. For data ingestion, Tinybird's Events API allows you to stream JSON/NDJSON events from your application frontend or backend with a simple HTTP request. This method is ideal for real-time data streaming due to its low latency. Here's an example of how to ingest data into the service_health_metrics data source:

curl -X POST "https://api.europe-west2.gcp.tinybird.co/v0/events?name=service_health_metrics" \
     -H "Authorization: Bearer $TB_ADMIN_TOKEN" \
     -d '{
           "timestamp": "2023-01-01 10:00:00",
           "service_name": "auth-service",
           "endpoint": "/login",
           "response_time_ms": 150,
           "status_code": 200,
           "error_count": 0,
           "request_count": 100
         }'

Additionally, Tinybird provides other ingestion methods suitable for various data types and sources. For event/streaming data, the Kafka connector is beneficial for integrating with existing Kafka pipelines. For batch or file-based data, the Data Sources API and S3 connector offer efficient ways to bulk import historical data or batched records.
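For example, here's a minimal sketch of a bulk append with the Data Sources API, assuming your historical records live in a local metrics.ndjson file with one JSON object per line (verify the exact parameters against the Data Sources API docs for your region):

# Bulk-append a local NDJSON file of historical metrics to the existing data source.
# metrics.ndjson is a hypothetical file name; mode=append adds rows without replacing data.
curl -X POST "https://api.europe-west2.gcp.tinybird.co/v0/datasources?name=service_health_metrics&mode=append&format=ndjson" \
     -H "Authorization: Bearer $TB_ADMIN_TOKEN" \
     -F "ndjson=@metrics.ndjson"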

Transforming data and publishing APIs

Tinybird's pipes transform data and publish it as API endpoints. A pipe can run batch transformations, act as a materialized view for incremental real-time transformations, or expose its results as a scalable, efficient API endpoint.

Endpoint Performance Analysis

First, let's look at the endpoint_performance pipe, which analyzes performance metrics for specific service endpoints:

DESCRIPTION >
    API to analyze endpoint performance metrics

NODE endpoint_performance_node
SQL >
    %
    SELECT 
        service_name,
        endpoint,
        avg(response_time_ms) AS avg_response_time,
        max(response_time_ms) AS max_response_time,
        min(response_time_ms) AS min_response_time,
        sum(request_count) AS total_requests,
        sum(error_count) AS total_errors,
        round(sum(error_count) / sum(request_count) * 100, 2) AS error_rate
    FROM service_health_metrics
    WHERE 1=1
    {% if defined(service_name) %}
        AND service_name = {{String(service_name, 'auth-service')}}
    {% end %}
    {% if defined(endpoint) %}
        AND endpoint = {{String(endpoint, '/login')}}
    {% end %}
    {% if defined(start_date) %}
        AND timestamp >= {{DateTime(start_date, '2023-01-01 00:00:00')}}
    {% else %}
        AND timestamp >= now() - interval 1 day
    {% end %}
    {% if defined(end_date) %}
        AND timestamp <= {{DateTime(end_date, '2023-01-02 00:00:00')}}
    {% else %}
        AND timestamp <= now()
    {% end %}
    GROUP BY service_name, endpoint
    ORDER BY avg_response_time DESC

TYPE endpoint

This SQL query aggregates metrics by service name and endpoint, calculating the average, maximum, and minimum response times, total requests, total errors, and error rate. Query parameters allow for dynamic filtering by service name, endpoint, and date range, making the API flexible and adaptable to different monitoring needs. Here's how you could call this API:

curl -X GET "https://api.europe-west2.gcp.tinybird.co/v0/pipes/endpoint_performance.json?token=$TB_ADMIN_TOKEN&service_name=auth-service&endpoint=/login&start_date=2023-01-01%2000:00:00&end_date=2023-01-02%2000:00:00"
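Because the endpoint returns plain JSON (rows under a data key), it's straightforward to wire into a simple alerting loop. Below is a minimal sketch, not an official pattern: it polls the API once a minute with curl and uses jq to flag any service/endpoint pair whose error rate crosses a threshold. The host, the exported TB_ADMIN_TOKEN, and the 5% threshold are assumptions to adapt.

#!/usr/bin/env bash
# Hypothetical alerting loop: poll the endpoint_performance API every 60 seconds
# and print an alert for any row whose error_rate exceeds THRESHOLD percent.
# Assumes jq is installed and TB_ADMIN_TOKEN is exported; adjust the host to your region.
API="https://api.europe-west2.gcp.tinybird.co/v0/pipes/endpoint_performance.json"
THRESHOLD=5

while true; do
  curl -s "${API}?token=${TB_ADMIN_TOKEN}" \
    | jq -r --argjson t "$THRESHOLD" \
        '.data[] | select(.error_rate > $t) | "ALERT: \(.service_name) \(.endpoint) error_rate=\(.error_rate)%"'
  sleep 60
done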

Deploying to production

Deploying your Tinybird project to production is straightforward with the tb --cloud deploy command. This command deploys your data sources, pipes, and endpoints to the Tinybird Cloud, making them scalable and ready for production use. Tinybird manages resources as code, which means you can integrate these deployments into your CI/CD pipelines, ensuring a smooth and consistent deployment process. To secure your APIs, Tinybird uses token-based authentication. Here's an example command to call your deployed endpoint:

curl -X GET "https://api.tinybird.co/v0/pipes/your_endpoint_name.json?token=YOUR_TOKEN"
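If you run deployments from CI, the deploy step reduces to the same command. A minimal sketch, assuming the tb CLI is installed in the CI image and authenticated against your workspace (for example via an admin token stored as a CI secret; see the Tinybird docs for the exact auth setup):

#!/usr/bin/env bash
set -euo pipefail  # fail the CI job on any error

# Deploy the project's data sources, pipes, and endpoints to Tinybird Cloud.
# Assumes the tb CLI is available and credentials come from the CI environment.
tb --cloud deploy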

Conclusion

In this tutorial, you've learned how to use Tinybird to build a real-time API for monitoring service health metrics. By creating data sources to ingest metrics data, transforming that data with pipes, and publishing it as scalable API endpoints, you can efficiently monitor microservice performance in real-time. The technical benefits of using Tinybird for this use case include efficient data ingestion, real-time data transformation, and the ability to create and deploy scalable APIs quickly. Sign up for Tinybird to build and deploy your first real-time data APIs in a few minutes. Tinybird is free to start, with no time limit and no credit card required, allowing you to experience the power of real-time data analytics firsthand.
