A Comprehensive Guide to Spring Boot 3.2 with Java 21, Virtual Threads, Spring Security, PostgreSQL, Flyway, Caching, Micrometer, OpenTelemetry, JUnit 5, RabbitMQ, Keycloak Integration, and More! (10/17)
Let’s continue with part 10 of our Medium series!
In this article, I’ll explore the fundamentals of an API built with Spring Boot. It illustrates common features such as persistence, error handling, authentication & authorization, caching, message brokering, rate limiting, circuit breaking, observability, integration testing, and more.
As I found myself constantly building the same API again and again, I tried to consolidate all my knowledge and experiences into this example. By using this project as a foundation, I hope developers can enhance their development workflow, freeing themselves from the challenges of building an API from the ground up.
1. Is Java still relevant in 2023?
You might have heard (ad nauseam) that Java is dead, but it’s actually quite the opposite: Java has never been more interesting. With innovations like GraalVM, Virtual Threads, and CRaC, plus a booming ecosystem, the language is thriving. Many major players depend on it and are committed to its long-term success, keeping it alive and constantly innovating. While there are many good alternatives like Golang and Rust that are (mostly) faster and more efficient, Java is keeping pace, and it is fascinating to watch! At the end of the day, if your application can process 18k instead of 22k requests/s, it doesn’t make much difference 95% of the time.
You can easily process millions of requests per day with only 1 CPU and a few GB of memory, and it probably won’t break your billing! If you start to scale like crazy, then it might matter. Ultimately, choose the framework where you and your team are the most productive.
2. What is Spring Boot?
Spring Boot is an open-source Java-based framework that simplifies the process of building and deploying production-ready applications. It provides a convention-over-configuration approach, reducing the need for manual setup and boilerplate code. With embedded application servers and a wide range of pre-built templates, it accelerates the development of Java applications.
Spring Boot was created to address the complexity and challenges associated with the Spring Framework. Its main goal is to simplify the process of building, deploying, and managing Spring applications, making it easier for developers to create production-ready software with minimal effort and configuration.
The Spring Framework dates back to 2003. It’s a big dinosaur! You can be certain that whatever problem you encounter, someone has tried to solve it, and there is a good chance a solution already exists!
3. Project structure
The project hierarchy adheres to standard Java package conventions, organized by package type. Personally, I find it challenging to begin with a fully modular approach, as you may not initially have a complete understanding of your application. I recommend starting with a simple structure and, as your understanding matures, adapting it to your requirements. The controllers, requests, and responses are organized on a per-product basis to enhance code separation and security:
├── postman
├── scripts
└── src
    └── main
        ├── java
        │   └── com
        │       └── mycompany
        │           └── microservice
        │               └── api
        │                   ├── clients
        │                   │   ├── http
        │                   │   └── slack
        │                   ├── controllers
        │                   │   ├── backoffice
        │                   │   ├── internal
        │                   │   │   ├── actuator
        │                   │   │   ├── cloudfunctions
        │                   │   │   ├── cloudschedulers
        │                   │   │   └── integrations
        │                   │   ├── management
        │                   │   │   └── base
        │                   │   ├── platform
        │                   │   │   ├── api
        │                   │   │   ├── mobile
        │                   │   │   └── web
        │                   │   └── public
        │                   ├── entities
        │                   │   └── base
        │                   ├── enums
        │                   ├── exceptions
        │                   ├── facades
        │                   ├── infra
        │                   │   ├── advice
        │                   │   ├── auditors
        │                   │   ├── auth
        │                   │   │   ├── converters
        │                   │   │   ├── jwt
        │                   │   │   └── providers
        │                   │   ├── executors
        │                   │   ├── filters
        │                   │   ├── interceptors
        │                   │   ├── otlp
        │                   │   ├── ratelimit
        │                   │   └── security
        │                   ├── listeners
        │                   ├── mappers
        │                   │   ├── annotations
        │                   │   └── base
        │                   ├── rabbitmq
        │                   │   ├── configs
        │                   │   ├── listeners
        │                   │   └── publishers
        │                   ├── repositories
        │                   ├── requests
        │                   │   ├── management
        │                   │   └── shared
        │                   ├── responses
        │                   │   ├── management
        │                   │   └── shared
        │                   ├── services
        │                   │   └── base
        │                   └── utils
        └── resources
            └── db
                └── migration
4. Controllers & Services
The controllers conform to a strict naming convention: the class name is derived from the folder hierarchy, down to the last folder. For example, controllers/platform/api holds the PlatformApiController. They all share the same structure:
@Slf4j
@RestController
@RequestMapping(PlatformApiController.BASE_URL)
@RequiredArgsConstructor
public class PlatformApiController {

  public static final String BASE_URL = AppUrls.PLATFORM_API;

  @GetMapping("/hello-world")
  @ResponseStatus(HttpStatus.OK)
  public String helloWorld() {
    return "Hello world";
  }
}
They get their URL from a constants class. You don’t want to mess with your URL definitions, especially when it comes to authorization.
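For illustration, here is a hedged sketch of what such a constants class might look like; AppUrls exists in the project, but the exact values below are assumptions based on the URLs appearing in this article:

// Illustrative sketch only: AppUrls is real, but these values are assumptions.
public final class AppUrls {

  public static final String BACK_OFFICE = "/back-office";
  public static final String INTERNAL = "/internal";
  public static final String MANAGEMENT = "/management";
  public static final String PLATFORM_API = "/platform/api";

  private AppUrls() {}
}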
And the Services:
@Slf4j
@Transactional(readOnly = true)
@Service
@RequiredArgsConstructor
public class CompanyService extends BaseService<Company> {

  @Getter private final CompanyRepository repository;

  [...]
}
By default, it marks all transactions as readOnly to improve performance. It extends a BaseService that implements many reusable functions.
I personally like to put most of the business rules in the controller to keep the service reusable and to shorten transactions. It is a questionable preference, but when well implemented, I have found it keeps the code DRY, reliable, and efficient.
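The BaseService itself isn’t shown here; a plausible minimal sketch, assuming Spring Data JPA and that subclasses expose their repository (e.g. via Lombok’s @Getter, as CompanyService does above), might be:

import java.util.List;
import jakarta.persistence.EntityNotFoundException;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.transaction.annotation.Transactional;

// A plausible sketch of BaseService; the real project implementation may differ.
@Transactional(readOnly = true)
public abstract class BaseService<T extends BaseEntity> {

  // Implemented by subclasses, e.g. generated by Lombok's @Getter on the repository field.
  protected abstract JpaRepository<T, Long> getRepository();

  public T findById(final Long id) {
    return this.getRepository()
        .findById(id)
        .orElseThrow(() -> new EntityNotFoundException("Entity %d not found.".formatted(id)));
  }

  public List<T> findAll() {
    return this.getRepository().findAll();
  }

  @Transactional
  public T save(final T entity) {
    return this.getRepository().save(entity);
  }
}

A subclass like CompanyService would then inherit findById, findAll, and save for free.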
5. Start project
Make sure you have Java 21 and all the dependencies installed and run:
make start-all
It runs the docker-compose.yaml and starts all the required applications.
Start playing with the API by importing the Postman definitions. Most endpoints are protected with either a Keycloak access token or an API key. The API key is set after importing the dev environment, and Keycloak tokens are set automatically after logging in as a user.
6. Entities
It has only two entities, Company and ApiKey, which are strictly necessary to showcase the MVP. They extend BaseEntity to provide auditing:
@Getter
@Setter
@NoArgsConstructor
@AllArgsConstructor
@SuperBuilder
@MappedSuperclass
public abstract class BaseEntity implements Serializable {

  @Serial private static final long serialVersionUID = 7677353645504602647L;

  @CreatedBy @Column private String createdBy;

  @LastModifiedBy @Column private String updatedBy;

  @CreatedDate
  @Column(nullable = false, updatable = false)
  private LocalDateTime createdAt;

  @LastModifiedDate
  @Column(nullable = false)
  private LocalDateTime updatedAt;

  public abstract Long getId();
}
And they implement a common entity structure:
@Entity
@EntityListeners(AuditingEntityListener.class)
@Getter
@Setter
@NoArgsConstructor
@AllArgsConstructor
@SuperBuilder
@Table(name = TABLE_NAME, schema = "public")
public class Company extends BaseEntity {

  public static final String TABLE_NAME = "company";

  @Serial private static final long serialVersionUID = 2137607105409362080L;

  @Id
  @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = TABLE_NAME)
  @GenericGenerator(
      name = TABLE_NAME,
      type = BatchSequenceGenerator.class,
      parameters = {
        @Parameter(name = "sequence", value = TABLE_NAME + "_id_seq"),
        @Parameter(name = "fetch_size", value = "1")
      })
  private Long id;

  [...]
}
Adapt this configuration to best fit your needs.
📒 Note: Hibernate disables insert batching if you use GenerationType.IDENTITY. By using BatchSequenceGenerator, Hibernate asks for a batch of IDs (fetch_size) to prevent additional round trips to the database. With the default configuration, inserting 5 records costs 5 round trips to get the 5 incremental IDs plus 1 more to insert the records; with a fetch size of 5, it only costs 2. If you use UUIDs, be very careful, as they can slow down your application.
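Note that ID fetching only pays off if JDBC statement batching is enabled as well; a minimal sketch using the standard Hibernate settings (the project may configure this differently):

spring:
  jpa:
    properties:
      hibernate:
        jdbc:
          batch_size: 50      # group inserts/updates into JDBC batches of 50
        order_inserts: true   # order statements so consecutive inserts can be batched
        order_updates: true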
7. Database & Flyway
It uses PostgreSQL 15 for persistence and Flyway for managing migrations.
Flyway offers a convenient and automated way to manage database schema migrations. It version-controls your database schema, making it easy to apply incremental updates as your API evolves. This ensures consistency across different instances of your API, simplifies collaboration among developers, and provides a reliable mechanism for handling database changes in a structured and reproducible manner.
Flyway setup:
- 1.1.1 Creates the tables and a few initial records to simplify API testing.
- 2.1.1 Only used for local testing, it enables logical decoding to experiment with CDC tools like Debezium.
📒 Note: Be very careful when your database grows: Flyway runs each migration inside a transaction by default, which makes it ill-suited for PostgreSQL’s CREATE INDEX CONCURRENTLY (it cannot run inside a transaction).
8. Error handling
Error handling refers to the process of managing and responding to unexpected or erroneous situations that may occur during the execution of API requests. It involves detecting, categorizing, and appropriately addressing issues to prevent service disruptions and provide meaningful feedback to clients. Common techniques include returning standardized error codes, such as HTTP status codes, and including detailed error messages or additional information in the response payload. Effective error handling in an API contributes to improved reliability, easier troubleshooting, and a more user-friendly experience for developers integrating with the API.
It enhances security by providing a centralized place for logging and monitoring. Believe me, you don’t want to spread your error-handling logic all over your codebase!
8.1. Spring Exception
Spring 6 implements the Problem Details for HTTP APIs specification, RFC 7807, since obsoleted by RFC 9457.
By using @ControllerAdvice and extending ResponseEntityExceptionHandler, it’s very easy to implement. For example, the @Valid handler:
@Slf4j
@ControllerAdvice
@RequiredArgsConstructor
public class GlobalExceptionHandler extends ResponseEntityExceptionHandler {

  // Processes @Valid validation errors.
  @Override
  protected ResponseEntity<Object> handleMethodArgumentNotValid(
      @NonNull final MethodArgumentNotValidException ex,
      @NonNull final HttpHeaders headers,
      @NonNull final HttpStatusCode status,
      @NonNull final WebRequest request) {

    log.info(ex.getMessage(), ex);

    final List<ApiErrorDetails> errors = new ArrayList<>();
    for (final ObjectError err : ex.getBindingResult().getAllErrors()) {
      errors.add(
          ApiErrorDetails.builder()
              .pointer(((FieldError) err).getField())
              .reason(err.getDefaultMessage())
              .build());
    }

    return ResponseEntity.status(BAD_REQUEST)
        .body(this.buildProblemDetail(BAD_REQUEST, "Validation failed.", errors));
  }

  private ProblemDetail buildProblemDetail(
      final HttpStatus status, final String detail, final List<ApiErrorDetails> errors) {
    final ProblemDetail problemDetail =
        ProblemDetail.forStatusAndDetail(status, StringUtils.normalizeSpace(detail));
    // Adds the errors field on validation errors, following RFC 9457 best practices.
    if (CollectionUtils.isNotEmpty(errors)) {
      problemDetail.setProperty("errors", errors);
    }
    return problemDetail;
  }
}
translates to:
{
  "type": "about:blank",
  "title": "Bad Request",
  "status": 400,
  "detail": "Validation failed.",
  "instance": "/management/companies",
  "errors": [
    {
      "pointer": "name",
      "reason": "must not be blank"
    },
    {
      "pointer": "slug",
      "reason": "must not be blank"
    }
  ]
}
Overriding the controller method parameter validations, e.g. @RequestParam, @PathVariable:
// Processes controller method parameter validations, e.g. @RequestParam, @PathVariable, etc.
@Override
protected ResponseEntity<Object> handleHandlerMethodValidationException(
    final @NotNull HandlerMethodValidationException ex,
    final @NotNull HttpHeaders headers,
    final @NotNull HttpStatusCode status,
    final @NotNull WebRequest request) {

  log.info(ex.getMessage(), ex);

  final List<ApiErrorDetails> errors = new ArrayList<>();
  for (final var validation : ex.getAllValidationResults()) {
    final String parameterName = validation.getMethodParameter().getParameterName();
    validation
        .getResolvableErrors()
        .forEach(
            error ->
                errors.add(
                    ApiErrorDetails.builder()
                        .pointer(parameterName)
                        .reason(error.getDefaultMessage())
                        .build()));
  }

  return ResponseEntity.status(BAD_REQUEST)
      .body(this.buildProblemDetail(BAD_REQUEST, "Validation failed.", errors));
}
translates to:
{
  "type": "about:blank",
  "title": "Bad Request",
  "status": 400,
  "detail": "Validation failed.",
  "instance": "/back-office/hello-world",
  "errors": [
    {
      "pointer": "email",
      "reason": "must be a well-formed email address"
    }
  ]
}
[…]
8.2. Application exceptions
All application exceptions extend the RootException:
@Getter
public class RootException extends RuntimeException {

  @Serial private static final long serialVersionUID = 6378336966214073013L;

  private final HttpStatus httpStatus;
  private final List<ApiErrorDetails> errors = new ArrayList<>();

  public RootException(@NonNull final HttpStatus httpStatus) {
    super();
    this.httpStatus = httpStatus;
  }

  public RootException(@NonNull final HttpStatus httpStatus, final String message) {
    super(message);
    this.httpStatus = httpStatus;
  }
}
Again, the @ControllerAdvice implements a global error handler:
@ExceptionHandler(RootException.class)
public ResponseEntity<ProblemDetail> rootException(final RootException ex) {
  log.info(ex.getMessage(), ex);
  // Uses the default message; can be adapted to use ex.getMessage().
  final ProblemDetail problemDetail =
      this.buildProblemDetail(
          ex.getHttpStatus(), API_DEFAULT_REQUEST_FAILED_MESSAGE, ex.getErrors());
  return ResponseEntity.status(ex.getHttpStatus()).body(problemDetail);
}
{
  "type": "about:blank",
  "title": "Internal Server Error",
  "status": 500,
  "detail": "Request failed.",
  "instance": "/back-office/hello-world"
}
8.3. Fallback exceptions
All uncaught exceptions fall back into this handler:
@ResponseStatus(HttpStatus.INTERNAL_SERVER_ERROR)
@ExceptionHandler(Throwable.class)
public ProblemDetail handleAllExceptions(final Throwable ex, final WebRequest request) {
  log.warn(ex.getMessage(), ex);
  this.slack.notify(format("[API] InternalServerError: %s", ex.getMessage()));
  return this.buildProblemDetail(HttpStatus.INTERNAL_SERVER_ERROR, API_DEFAULT_ERROR_MESSAGE);
}
{
  "type": "about:blank",
  "title": "Internal Server Error",
  "status": 500,
  "detail": "Something went wrong. Please try again later or enter in contact with our service.",
  "instance": "/back-office/hello-world"
}
This is usually where you want to alert your Slack channel. The API implements a notification template for this.
📒 Note: When alerting your Slack channel, always add a reference to the traceId to simplify your request debugging.
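One way to do that, assuming a Micrometer Tracing Tracer (io.micrometer.tracing.Tracer) is injected alongside the Slack client; a sketch, not necessarily the project’s exact code:

// Sketch: the fallback handler enriched with the current traceId via Micrometer Tracing.
// `tracer` is an assumed injected field, next to the existing `slack` client.
@ResponseStatus(HttpStatus.INTERNAL_SERVER_ERROR)
@ExceptionHandler(Throwable.class)
public ProblemDetail handleAllExceptions(final Throwable ex, final WebRequest request) {
  log.warn(ex.getMessage(), ex);
  final var span = this.tracer.currentSpan();
  final String traceId = span != null ? span.context().traceId() : "unknown";
  this.slack.notify(
      format("[API] InternalServerError: %s (traceId: %s)", ex.getMessage(), traceId));
  return this.buildProblemDetail(HttpStatus.INTERNAL_SERVER_ERROR, API_DEFAULT_ERROR_MESSAGE);
}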
9. Authentication and Authorization
The API has 4 products, 6 APIs, and 5 roles (skipping the admin API for simplicity). Four APIs use JWTs and two use API keys.
Authentication and Authorization mechanisms are provided by the Spring Security module.
9.1. JWT authentication
Automatically configured in application.yaml:
spring:
  security:
    oauth2:
      resourceserver:
        jwt:
          issuer-uri: ${SECURITY_OAUTH_ISSUER_URI}
          jwk-set-uri: ${SECURITY_OAUTH_JWK_SET_URI}
9.2. JWT authorization
Keycloak roles are extracted from the JWT and mapped to their respective Spring GrantedAuthority:
private Collection<GrantedAuthority> extractRealmAccessRoles(final Jwt jwt) {
  final Map<String, Collection<String>> realmAccess = jwt.getClaim(CLAIM_REALM_ACCESS);
  if (realmAccess == null) {
    return Collections.emptyList();
  }
  final Collection<String> realmAccessRoles = realmAccess.get(CLAIM_ROLES);
  if (realmAccessRoles == null) {
    return Collections.emptyList();
  }
  return realmAccessRoles.stream()
      .map(role -> new SimpleGrantedAuthority("ROLE_" + role))
      .collect(Collectors.toSet());
}
Each user in Keycloak is assigned to a group and a sub-group, with the sub-group automatically assigning a corresponding role. For instance, the back-office group has a sub-group user that automatically allocates the role back_office_user.
The Spring Security Filter Chain then authorizes the API based on the Keycloak roles.
http
    [...]
    .authorizeHttpRequests(
        authorize ->
            authorize
                [...]
                .requestMatchers(AppUrls.BACK_OFFICE + "/**")
                .hasAnyRole(BACK_OFFICE_USER.getName())
                [...]
9.3. API authentication
API authentication is done by mapping the Api-Key header to the PostgreSQL table api_key.key, using the ApiKeyAuthenticationFilter and ApiKeyAuthenticationProvider:
@Slf4j
public class ApiKeyAuthenticationFilter extends AbstractAuthenticationProcessingFilter {

  public ApiKeyAuthenticationFilter(
      final String defaultFilterProcessesUrl, final AuthenticationManager authenticationManager) {
    super(defaultFilterProcessesUrl);
    this.setAuthenticationManager(authenticationManager);
  }

  @Override
  public Authentication attemptAuthentication(
      final HttpServletRequest request, final HttpServletResponse response) {
    final String apiKeyHeader = request.getHeader(AppHeaders.API_KEY_HEADER);
    final Optional<String> apiKeyOptional =
        StringUtils.isNotBlank(apiKeyHeader) ? Optional.of(apiKeyHeader) : Optional.empty();
    final ApiKeyAuthentication apiKey =
        apiKeyOptional.map(ApiKeyAuthentication::new).orElse(new ApiKeyAuthentication());
    return this.getAuthenticationManager().authenticate(apiKey);
  }
}
@Slf4j
public class ApiKeyAuthenticationProvider implements AuthenticationProvider {

  @Autowired private ApiKeyService apiKeyService;
  @Autowired private CompanyService companyService;

  @Override
  public Authentication authenticate(final Authentication authentication)
      throws AuthenticationException {
    final String apiKeyInRequest = (String) authentication.getPrincipal();
    if (StringUtils.isBlank(apiKeyInRequest)) {
      throw new InsufficientAuthenticationException("api-key is not defined on request");
    } else {
      final Optional<ApiKey> apiKeyOptional = this.apiKeyService.findByKeyOptional(apiKeyInRequest);
      if (apiKeyOptional.isPresent()) {
        final ApiKey apiKey = apiKeyOptional.get();
        final Company company = this.companyService.findById(apiKey.getCompanyId());
        final ApiKeyDetails apiKeyDetails =
            ApiKeyDetails.builder()
                .id(apiKey.getId())
                .companySlug(company.getSlug())
                .email(company.getEmail())
                .isInternal(Boolean.TRUE.equals(company.getIsInternal()))
                .isPlatform(Boolean.TRUE.equals(company.getIsPlatform()))
                .build();
        return new ApiKeyAuthentication(
            apiKey.getKey(), true, apiKeyDetails, company.getGrantedAuthoritiesFromCompanyType());
      }
      throw new BadCredentialsException("invalid api-key");
    }
  }
}
9.4. API authorization
The ApiKeyAuthentication constructor uses company.getGrantedAuthoritiesFromCompanyType(), which builds the GrantedAuthorities based on the company.is_internal and company.is_platform columns.
public Collection<GrantedAuthority> getGrantedAuthoritiesFromCompanyType() {
  return this.getApiRolesFromCompanyType().stream()
      .map(role -> new SimpleGrantedAuthority("ROLE_" + role.getName()))
      .collect(Collectors.toSet());
}

private List<UserRolesEnum> getApiRolesFromCompanyType() {
  final List<UserRolesEnum> roles = new ArrayList<>();
  if (Boolean.TRUE.equals(this.isInternal)) {
    roles.add(INTERNAL_API_USER);
  }
  if (Boolean.TRUE.equals(this.isPlatform)) {
    roles.add(PLATFORM_API_USER);
  }
  return roles;
}
Again, the Spring Security Filter Chain authorizes the API:
http
    [...]
    .authorizeHttpRequests(
        authorize ->
            authorize
                [...]
                .requestMatchers(AppUrls.INTERNAL + "/**")
                .hasAnyRole(INTERNAL_API_USER.getName())
                [...]
9.5. Company’s identity
Every user is linked to a company.slug to identify which company it belongs to. The JWT uses a Keycloak user attribute; for example, the user back-office has the company slug back-office.
It is not ideal, since user attributes cannot be enforced at the realm level. However, there is an experimental feature, --features=declarative-user-profile, that solves this. It should be stable in Keycloak 24.
The ApiKeyDetails has a companySlug field which directly holds the value.
Auth interactions are hidden behind an AuthFacade to simplify data retrieval:
@Slf4j
@UtilityClass
public class AuthFacade {

  public static Optional<String> getCompanySlugOptional() {
    try {
      final Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
      if (isJWT(authentication)) {
        return getCompanySlugFromJwt((Jwt) authentication.getPrincipal());
      } else if (isApiKey(authentication)) {
        return getCompanySlugFromApikey(authentication);
      }
      return Optional.empty();
    } catch (final Exception ex) {
      throw new InternalServerErrorException();
    }
  }
}
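The JWT branch is not shown in the article; a hedged sketch, assuming the Keycloak user attribute is mapped into a custom claim named company_slug:

// Sketch: the claim name "company_slug" is an assumption based on the
// Keycloak user-attribute mapping described above.
private static Optional<String> getCompanySlugFromJwt(final Jwt jwt) {
  return Optional.ofNullable(jwt.getClaimAsString("company_slug"));
}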
9.6. Advanced authorization
The implementation follows Role-Based Access Control (RBAC) with two different auth mechanisms, which is far from ideal but relatively simple and efficient.
For a more fine-grained authorization you can use Keycloak Attribute-Based Access Control (ABAC).
If you want to move the API key authentication and authorization to Keycloak (to keep only one mechanism), I would suggest using the Resource Owner Password Credentials grant (easier to set up but deprecated in OAuth 2) or the Client Credentials grant (recommended). However, the Client Credentials grant does not (yet) scale well.
10. Caching
Caching is essential for improving performance and reducing response times by storing and reusing frequently requested data. It helps alleviate the load on backend servers, enhances scalability, and contributes to a more responsive and efficient user experience.
Spring Boot makes it really easy to start using a cache abstraction; you only need to add the following dependency:
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
It creates a ConcurrentMapCacheManager that sits in your application’s memory. As your application grows, it is strongly recommended to migrate to a dedicated system like Redis. The best part: you won’t need to change any of your code!
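Don’t forget that the cache abstraction must also be switched on; a minimal sketch:

import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Configuration;

// Enables Spring's annotation-driven caching (@Cacheable, @CacheEvict, ...).
@Configuration
@EnableCaching
public class CacheConfig {}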
Let’s explore the project implementation:
public interface CompanyRepository extends JpaRepository<Company, Long> {

  String CACHE_NAME = "company";

  @NonNull
  @Cacheable(value = CACHE_NAME, key = "{'byId', #id}")
  @Override
  Optional<Company> findById(@NonNull Long id);

  @Cacheable(value = CACHE_NAME, key = "{'bySlug', #slug}")
  Optional<Company> findBySlug(String slug);

  @Caching(
      evict = {
        @CacheEvict(value = CACHE_NAME, key = "{'byId', #entity.id}"),
        @CacheEvict(value = CACHE_NAME, key = "{'bySlug', #entity.slug}"),
      })
  @Override
  <S extends Company> @NonNull S save(@NonNull S entity);

  /*
   * This cache implementation is only valid if the table is not
   * frequently updated, since it clears the whole cache on every update operation.
   * If you want to be more performant you can use something like https://github.com/ms100/cache-as-multi
   */
  @NonNull
  @CacheEvict(cacheNames = CACHE_NAME, allEntries = true)
  @Override
  <S extends Company> List<S> saveAll(@NonNull Iterable<S> entities);

  @Caching(
      evict = {
        @CacheEvict(value = CACHE_NAME, key = "{'byId', #entity.id}"),
        @CacheEvict(value = CACHE_NAME, key = "{'bySlug', #entity.slug}"),
      })
  @Override
  void delete(@NonNull Company entity);

  /*
   * Same caveat as saveAll: this clears the whole cache on every delete operation.
   */
  @CacheEvict(cacheNames = CACHE_NAME, allEntries = true)
  @Override
  void deleteAll(@NonNull Iterable<? extends Company> entities);
}
It uses the @Cacheable annotation. I personally like to scope my cache keys to prevent collisions. Be careful with xAll operations like saveAll or deleteAll, as they clear the whole cache every time they are called; this might not suit your use case.
📒 Note: If your production environment has more than one instance, you can add a @Scheduled function to clear your cache at a fixed interval, as sketched below. If that doesn’t fit your use case, consider migrating to a centralized cache instance.
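A hedged sketch of such a scheduled eviction; the interval and property name are assumptions, and it requires @EnableScheduling on a configuration class:

import lombok.RequiredArgsConstructor;
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Sketch: clears every cache at a fixed interval (10 minutes by default here).
@Component
@RequiredArgsConstructor
public class CacheEvictionScheduler {

  private final CacheManager cacheManager;

  @Scheduled(fixedRateString = "${cache.eviction-interval-ms:600000}")
  public void evictAllCaches() {
    for (final String name : this.cacheManager.getCacheNames()) {
      final Cache cache = this.cacheManager.getCache(name);
      if (cache != null) {
        cache.clear();
      }
    }
  }
}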
11. Rate limiting
It is usually recommended to rate-limit at the load balancer (and/or WAF) level. For security purposes, the API also implements a default request limiter using a OncePerRequestFilter, with a default value of 50 req/s per IP address:
@Slf4j
@Component
@RequiredArgsConstructor
public class RateLimitFilter extends OncePerRequestFilter {

  public static final String HEADER_RATE_LIMIT_REMAINING = "X-Rate-Limit-Remaining";
  public static final String HEADER_RATE_LIMIT_RETRY_AFTER_SECONDS =
      "X-Rate-Limit-Retry-After-Milliseconds";

  private final DefaultRateLimit defaultRateLimit;
  private final Map<String, Bucket> cache = new ConcurrentHashMap<>();

  @Override
  protected void doFilterInternal(
      @NonNull final HttpServletRequest request,
      @NonNull final HttpServletResponse response,
      @NonNull final FilterChain filterChain)
      throws ServletException, IOException {

    final Bucket bucket = this.resolveBucket(request);
    final ConsumptionProbe probe = bucket.tryConsumeAndReturnRemaining(1);

    if (probe.isConsumed()) {
      // Comment out if you want to hide the remaining requests.
      response.addHeader(HEADER_RATE_LIMIT_REMAINING, String.valueOf(probe.getRemainingTokens()));
      filterChain.doFilter(request, response);
    } else {
      final long waitForRefill = probe.getNanosToWaitForRefill() / 1_000_000;
      response.reset();
      // Comment out if you want to hide the remaining time before refill.
      response.addHeader(HEADER_RATE_LIMIT_RETRY_AFTER_SECONDS, String.valueOf(waitForRefill));
      response.setContentType(MediaType.APPLICATION_JSON_VALUE);
      response.setStatus(TOO_MANY_REQUESTS.value());
    }
  }

  private Bucket resolveBucket(final HttpServletRequest request) {
    final BaseRateLimit rateLimit = this.getRateLimitFor(request.getRequestURI());
    // Computes the cache key from the remote (IP) address.
    return this.cache.computeIfAbsent(
        request.getRemoteAddr(), s -> Bucket.builder().addLimit(rateLimit.getLimit()).build());
  }

  private BaseRateLimit getRateLimitFor(final String requestedUri) {
    // Use a switch if you want different rate limits per URI.
    return this.defaultRateLimit;
  }
}
12. Circuit Breaker
A circuit breaker is a software pattern that helps enhance system resilience in distributed applications. It monitors operations and temporarily interrupts their execution when a predefined threshold of failures is reached, preventing potential cascading failures and allowing the system to recover. This pattern is crucial for maintaining overall system stability in the face of transient faults.
In Java, you want to be very careful with your available threads, as they can quickly be depleted. Blocking code, such as an HTTP client, can impose a significant burden, especially when a downstream service is unavailable. This is why you want some kind of mechanism to protect your threads.
Integration services provide a great example: they receive a request, transform/enrich it, and await the downstream service’s response. If that service becomes unavailable, waiting calls can quickly bloat the application and make it unresponsive.
Reactive programming can free the thread while waiting on blocking code, which significantly alleviates resource usage. However, implementing reactive code can be challenging, and nothing is easier than accidentally producing blocking reactive code. I highly recommend using BlockHound to test your reactive code; you might be in for some surprises! Examples can be found within the project.
Then virtual threads arrived in Java 21 and changed everything. Threads are now almost unlimited, and blocking code that spends most of its time waiting (doing nothing) is no longer a problem.
To work with virtual threads in Spring Boot 3.2, you just need to set spring.threads.virtual.enabled in your application.yaml:
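spring:
  threads:
    virtual:
      enabled: true # Spring Boot 3.2+: serve requests and run supported task executors on virtual threads.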
However, if your application does not have any of those options, you can find an example at this link.
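For illustration, here is a hedged sketch using Resilience4j’s Spring Boot starter (an assumption; the linked example may use a different library). The instance name, URL, and fallback are illustrative:

import io.github.resilience4j.circuitbreaker.annotation.CircuitBreaker;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestClient;

// Sketch: wraps a blocking HTTP call in a circuit breaker. The "downstream"
// instance (configured in application.yaml) and the URL are assumptions.
@Service
public class DownstreamClient {

  private final RestClient restClient = RestClient.create("https://downstream.example.com");

  @CircuitBreaker(name = "downstream", fallbackMethod = "fallback")
  public String fetch() {
    return this.restClient.get().uri("/resource").retrieve().body(String.class);
  }

  // Called when the circuit is open or the call fails.
  private String fallback(final Throwable ex) {
    return "downstream unavailable";
  }
}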
📒 Note: Configuring a circuit breaker can be really tricky, it involves careful consideration of parameters and thresholds. Accurately predicting its usage may be challenging, as it depends on various factors such as traffic patterns and system dynamics. It’s crucial to continually monitor and fine-tune the configuration to ensure optimal performance in handling faults.
13. Observability — Metrics & Traces
[27/03/2024] The code was updated to use OpenTelemetry: cf. Embracing OpenTelemetry: A Step-by-Step guide to transitioning from Micrometer to OpenTelemetry using Spring Boot and Buildpacks
13.1. Metrics
Spring Boot handles all the heavy lifting for you; you only need to add the following dependencies:
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
  <groupId>io.micrometer</groupId>
  <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>
Expose the Prometheus endpoint:
management:
  endpoints:
    web:
      exposure:
        include: info, health, prometheus, metrics
And configure your monitoring system to scrape the actuator endpoint.
📒 Note: In production, be careful not to expose your management API (actuator). It is advisable to use a different port for your application and management API. By default, the project configures port 8080 for the application and port 8081 for the management API.
13.2. Tracing
Again, Spring Boot handles all the heavy lifting for you; you only need to add the following dependencies:
<dependency>
  <groupId>io.micrometer</groupId>
  <artifactId>micrometer-tracing-bridge-otel</artifactId>
</dependency>
<dependency>
  <groupId>net.ttddyy.observation</groupId>
  <artifactId>datasource-micrometer-spring-boot</artifactId>
  <version>${datasource-micrometer.version}</version>
</dependency>
<dependency>
  <groupId>io.opentelemetry</groupId>
  <artifactId>opentelemetry-exporter-otlp</artifactId>
  <version>${opentelemetry-exporter-otlp.version}</version>
</dependency>
Every synchronous request will be automatically decorated with a traceId and spanId. To decorate async requests, add spring.reactor.context-propagation=true.
Some components require additional configuration:
- WebClient -> needs to be constructed with the WebClient.Builder, like this.
- RabbitMQ -> needs the factory created with factory.setObservationEnabled(true), like this.
- @Async -> (when using virtual threads) needs the SimpleAsyncTaskExecutor created with taskExecutor.setTaskDecorator(new ContextPropagatingTaskDecorator()), like this.
For testing locally, you can use Otel Desktop Viewer and update the application properties to:
management:
  tracing:
    sampling:
      probability: 1
  otlp:
    tracing:
      endpoint: http://localhost:4317
You should then see your traces appear in the viewer.
In production, you usually want to send those traces to a distributed backend like Jaeger or Tempo.
📒 Note: Tracing is your best friend for debugging requests. You will always end up navigating through your log system, filtering by the traceId.
13.3. Metrics with tracing — Exemplar
You can also use exemplars to associate metrics with traces. You only need to set the following configuration:
management:
  metrics:
    distribution:
      percentiles-histogram:
        http:
          server:
            requests: true
And it will correlate your metrics:
# TYPE http_server_requests_seconds histogram
# HELP http_server_requests_seconds Duration of HTTP server request handling
http_server_requests_seconds_bucket{application="app",exception="None",method="GET",outcome="SUCCESS",status="200",uri="/",le="0.002796201"} 1.0 # {span_id="55255da260e873d9",trace_id="21933703cb442151b1cef583714eb42e"} 0.002745959 1665676383.654
📒 Note: When using Prometheus, you need to add the --enable-feature=exemplar-storage flag.
14. Integration Testing
Integration testing in an API is vital for ensuring that various components and services work seamlessly together, simulating real-world scenarios. It helps identify and address issues related to the interaction between different parts of the system, ensuring the API functions as expected in an integrated environment.
Many strategies exist; some prefer integration testing, while others favor unit testing or end-to-end (e2e) testing. Personally, I find unit testing too lightweight for real-life applications. Well-architected integration tests can be both efficient and fast, significantly improving bug detection before bugs reach your production environment.
Spring Boot makes it easy to share the testing context, which significantly improves performance while keeping the downsides relatively low. This is why all integration test classes extend BaseIntegrationTest:
@ActiveProfiles("test")
@AutoConfigureMockMvc
@TestInstance(Lifecycle.PER_CLASS)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public abstract class BaseIntegrationTest {

  @Container @ServiceConnection
  public static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:15-alpine");

  @Container public static RabbitMQContainer rabbit = new RabbitMQContainer("rabbitmq:3.12.4");

  static {
    setRabbitConfig(rabbit);
    Startables.deepStart(postgres, rabbit).join();
  }

  @Autowired public MockMvc mockMvc;

  @DynamicPropertySource
  static void applicationProperties(final DynamicPropertyRegistry registry) {
    registry.add("rabbitmq.host", rabbit::getHost);
    registry.add("rabbitmq.port", rabbit::getAmqpPort);
    registry.add("rabbitmq.username", () -> "user");
    registry.add("rabbitmq.password", () -> "password");
  }

  private static void setRabbitConfig(final RabbitMQContainer rabbit) {
    rabbit.withCopyFileToContainer(
        MountableFile.forHostPath(getRabbitDefinition()), "/etc/rabbitmq/definitions.json");
    rabbit.withCopyFileToContainer(
        MountableFile.forHostPath(getRabbitConfig()), "/etc/rabbitmq/rabbitmq.conf");
  }
}
By using a common context, PostgreSQL and RabbitMQ are shared between test executions. It automatically configures MockMvc to easily test the API. With this implementation, the marginal cost of adding a test is relatively low, usually between 5 and 25 ms per test. A minimal test built on this base class is sketched below.
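A sketch against the hello-world endpoint shown earlier; authentication is omitted for brevity, whereas most real endpoints require a token or API key:

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;

// Sketch: reuses the shared context from BaseIntegrationTest, so PostgreSQL
// and RabbitMQ start only once across the whole test suite.
class PlatformApiControllerIT extends BaseIntegrationTest {

  @Test
  void helloWorld_returnsHelloWorld() throws Exception {
    this.mockMvc
        .perform(get(PlatformApiController.BASE_URL + "/hello-world"))
        .andExpect(status().isOk())
        .andExpect(content().string("Hello world"));
  }
}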
15. RabbitMQ
RabbitMQ is a message broker software that facilitates communication between distributed systems by enabling the asynchronous exchange of data. It solves the challenge of decoupling components in a system, allowing them to communicate efficiently without being directly connected. RabbitMQ enhances the scalability, reliability, and flexibility of distributed architectures by managing the flow of messages between different parts of a system or between various systems, ensuring seamless communication and coordination.
In today’s architecture, message brokering is nearly unavoidable. Two widely utilized open-source technologies are RabbitMQ and Kafka, with the latter being notably more challenging to manage.
It implements a pretty standard configuration with one publisher and one subscriber:
@Slf4j
@Configuration
public class RabbitConfig {

  public static final String RABBIT_ASYNC_EVENT_LISTENER_FACTORY = "AsyncEventListener";
  public static final String RABBIT_EVENT_PUBLISHER = "EventPublisher";

  @Value("${rabbitmq.host}")
  private String host;

  @Value("${rabbitmq.port}")
  private int port;

  @Value("${rabbitmq.username}")
  private String username;

  @Value("${rabbitmq.password}")
  private String password;

  @Value("${rabbitmq.listeners.event.prefetch-count}")
  private Integer prefetchCount;

  private ConnectionFactory connectionFactory(final String connectionName) {
    final CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
    connectionFactory.setConnectionNameStrategy(conn -> connectionName);
    connectionFactory.setHost(this.host);
    connectionFactory.setPort(this.port);
    connectionFactory.setUsername(this.username);
    connectionFactory.setPassword(this.password);
    return connectionFactory;
  }

  @Bean(name = RABBIT_ASYNC_EVENT_LISTENER_FACTORY)
  public DirectRabbitListenerContainerFactory eventListenerFactory() {
    final DirectRabbitListenerContainerFactory factory = new DirectRabbitListenerContainerFactory();
    factory.setConnectionFactory(this.connectionFactory("api-event-listener"));
    factory.setMessageConverter(new Jackson2JsonMessageConverter());
    factory.setObservationEnabled(true);
    factory.setAutoStartup(false); // started at ApplicationReadyEvent
    // Needed for listeners returning Mono<>:
    // https://docs.spring.io/spring-amqp/docs/current/reference/html/#async-listeners
    factory.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    factory.setDefaultRequeueRejected(false);
    factory.setPrefetchCount(this.prefetchCount);
    return factory;
  }

  @Bean(name = RABBIT_EVENT_PUBLISHER)
  public RabbitTemplate rabbitTemplate() {
    final RabbitTemplate template =
        new RabbitTemplate(this.connectionFactory("api-event-publisher"));
    template.setMessageConverter(new Jackson2JsonMessageConverter());
    template.setObservationEnabled(true);
    template.setRetryTemplate(RetryTemplate.defaultInstance());
    return template;
  }
}
You usually want to set the prefetch count to prevent overloading your application. We also enable observation to propagate traces between the publisher and the receiver; this adds a traceparent header to each message.
📒 Note: Depending on your RabbitMQ configuration policies, you might want to create the configuration explicitly in your application. I personally like to manage the configuration centrally in RabbitMQ to prevent misconfigurations or configuration drift.
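To give an idea of the consuming side, a hedged sketch of a listener wired to the factory above; the queue property name and payload shape are assumptions:

import java.util.Map;
import lombok.extern.slf4j.Slf4j;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;
import reactor.core.publisher.Mono;

// Sketch: a Mono-returning listener, matching the factory's MANUAL acknowledge
// mode (the container acks once the Mono completes). The queue name is an assumption.
@Slf4j
@Component
public class EventListener {

  @RabbitListener(
      queues = "${rabbitmq.listeners.event.queue}",
      containerFactory = RabbitConfig.RABBIT_ASYNC_EVENT_LISTENER_FACTORY)
  public Mono<Void> onEvent(final Map<String, Object> event) {
    log.info("received event: {}", event);
    return Mono.empty();
  }
}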
Wow, that was a pretty long article! I hope you enjoyed it, and see you in part 11!
- Learning from building the tech stacks of 5 startups and giving back to the community (1/17)
- Buy your first DNS and create a GCP organization (2/17)
- Terraforming GCP folders and Organization policies (3/17)
- Terraforming GCP projects (4/17)
- Terraforming shared VPC (host & services), GCP private service access and firewall rules (5/17)
- Terraforming DNS and IAP configurations (no VPN needed!) (6/17)
- Terraforming a bastion host using IAP and a (private) Kubernetes cluster with Cilium (7/17)
- Deploying an infra stack with ArgoCD Image Updater, Cert Manager, External DNS, External Secrets Operator, Ingress-Nginx Controller, Keycloak and RabbitMQ using a self-managed ArgoCD (8/17)
- Production considerations for running the infra stack (9/17)
- A Comprehensive Guide to Spring Boot 3.2 with Java 21, Virtual Threads, Spring Security, PostgreSQL, Flyway, Caching, Micrometer, OpenTelemetry, JUnit 5, RabbitMQ, Keycloak Integration, and More! (10/17)
- Production considerations for running PostgreSQL and Debezium (11/17)
- Building an automatic CI/CD using Git flow with GitHub Actions, Buildpack and Artifact Registry (12/17)
- Create your own open-source observability platform using ArgoCD, Prometheus, AlertManager, OpenTelemetry and Tempo (13/17)
- Deploying Grafana dashboards for ArgoCD, Spring Boot, Cert Manager, Nginx Ingress Controller, Keycloak, RabbitMQ, Tempo and Opentelemetry (14/17)
- Deploying Prometheus Rules for Cert Manager, Kubernetes container, Kubernetes, PostgreSQL, Prometheus, Tempo, Spring Boot API […] (15/17)
- OLAP where should we start ? Data Lake ? BigQuery ? Clickhouse ? (16/17)
- Don’t fall into the microservice trap (17/17)
[…]
If you have any questions or suggestions, please, feel free to reach me on LinkedIn!
Disclaimer: Technology development is a dynamic and evolving field, and real-world results may vary. Users should exercise their judgment, seek expert advice, and perform independent research to ensure the reliability and accuracy of any actions taken based on this tutorial. The author and publication are not liable for any consequences arising from the use of the information contained herein.