Hi there! I'm Maneshwar. Right now, I’m building LiveAPI, a first-of-its-kind tool that helps you automatically index API endpoints across all your repositories. LiveAPI makes it easier to discover, understand, and interact with APIs in large infrastructures.
Integrating dynamic service discovery into NGINX is a common need in modern microservices environments.
For example, if you use Nomad and Consul for service orchestration, you want NGINX to route traffic to the right service instances discovered in Consul.
The ngx_http_consul_backend_module lets NGINX query Consul at request time.
In a location block you can write consul $backend service-name; to have NGINX set the variable (e.g. $backend) to one of the healthy IP:PORT addresses for that service.
This means no manual NGINX reloads when backends change – each request goes to Consul and picks a random healthy instance.
As HashiCorp notes, “you will not have to restart nginx each time a change happens in the Consul services…because each request delegates to Consul, you get real-time results, and traffic will never be routed to an unhealthy host”.
A typical NGINX config snippet using this module looks like:
server {
    listen 80;
    server_name example.com;

    location /my-service {
        consul $backend my-service;  # Consul service name
        proxy_pass http://$backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Here the consul $backend my-service; directive uses the Consul module to populate $backend with an IP:PORT. NGINX then proxies to that address. Internally, NGINX calls the C function ngx_http_consul_backend, which dlopens the Go-based shared library, calls its exported symbol, and the Go code queries Consul’s API (using the official Go client) to get healthy addresses. A random healthy IP:PORT is returned and set as the backend for proxying.
Why Use the Consul Backend Module?
Using ngx_http_consul_backend_module is ideal for dynamic, microservices-based systems.
Instead of hardcoding upstreams or using DNS TTL hacks, NGINX can ask Consul directly for service instances.
This ensures NGINX always routes to healthy nodes (Consul filters out unhealthy ones) without needing reloads.
It effectively turns NGINX into a Consul-aware load balancer.
If you already run services in Nomad and register them in Consul, this module seamlessly bridges NGINX with Consul’s service catalog.
It’s especially useful in environments where services frequently scale up/down or change IPs.
Key benefits include:
- Real-time service discovery: Each request triggers a Consul lookup, so NGINX always has up-to-date backend info.
- Built-in health checks: Only healthy instances (as per Consul) are returned, so no separate health checks needed at NGINX.
- Automated load balancing: The module randomly picks an instance from the returned list, providing simple load distribution.
- No downtime on changes: You don’t have to reload or restart NGINX when service instances change, since lookups happen per-request.
How the Consul Backend Module Works
The high-level flow when using the consul directive is:
- A request matches a location with consul $var service-name.
- NGINX calls ngx_http_consul_backend($var, "service-name").
- The module does a dlopen on the Go-built .so library and invokes the Go function.
- The Go code uses the Consul API client to fetch healthy IP:PORTs for service-name and picks one randomly.
- The chosen address is returned and assigned to $var.
- NGINX then proxies the request to http://$var (e.g. via proxy_pass).
This process is summarized in HashiCorp’s documentation. In particular:
- NGINX calls ngx_http_consul_backend, which calls dlopen on the shared Go library…
- The Go function queries Consul, compiles a list of IP:PORTs, and chooses one to return.
- The IP:PORT is returned to ngx_http_consul_backend, which sets the variable.
Because of this design, updates in Consul (like a new service instance) are immediately visible to NGINX.
The only requirement is having a local Consul agent or reachable Consul cluster for low-latency lookups.
Ansible Role Overview
We can automate building NGINX with this module using an Ansible role.
A good structure is:
ansible
├── ansible.cfg
├── hosts.ini
├── nginx-build-playbook.yml
└── roles
└── nginx-with-consul-module
├── tasks
│ ├── purge_deps.yml
│ ├── install_dependencies.yml
│ ├── download_sources.yml
│ ├── build_consul_backend_module.yml
│ ├── configure_build.yml
│ ├── build_nginx.yml
│ └── systemd.yml
└── templates
└── nginx.service.j2
The main.yml in tasks/ simply imports these in order (install deps, get sources, build module, configure NGINX, compile, and set up systemd).
Breaking into subtasks keeps the playbook modular and readable.
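A minimal main.yml wiring these together might look like the following sketch (the file names match the task files listed above; adjust the order to your role):
- import_tasks: purge_deps.yml
- import_tasks: install_dependencies.yml
- import_tasks: download_sources.yml
- import_tasks: build_consul_backend_module.yml
- import_tasks: configure_build.yml
- import_tasks: build_nginx.yml
- import_tasks: systemd.yml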
Role logic: The playbook runs on the target NGINX host(s) (e.g. a Nomad server), with privilege escalation enabled (become: yes). We ensure idempotency so re-running the playbook is safe (Ansible’s apt, file, git, etc. modules are idempotent).
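The apt, file, and git modules handle this on their own; for the command/shell build steps later in the role, one optional pattern (a sketch, not part of the original role) is a creates argument so the step is skipped once its output already exists:
- name: Compile nginx (skipped if the binary already exists)
  command: make
  args:
    chdir: /tmp/nginx-1.24.0
    creates: /tmp/nginx-1.24.0/objs/nginx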
Step-by-Step Build Process
Below we outline the key steps the Ansible role performs.
For brevity we summarize core actions, but each step corresponds to a YAML task file as shown above.
1. Install Dependencies
First, ensure all build tools and libraries are present:
- C compiler and make (build-essential).
- PCRE (libpcre3-dev) for NGINX regex support.
- zlib (zlib1g-dev) for compression support.
- OpenSSL (libssl-dev) for HTTPS.
- Git, wget, curl to fetch source code.
Ansible example snippet (from install_dependencies.yml):
- name: Install required packages
  apt:
    name:
      - build-essential   # gcc, g++, make
      - libpcre3-dev      # PCRE regex
      - zlib1g-dev        # compression
      - libssl-dev        # SSL/TLS
      - git               # for cloning repos
      - wget              # for downloads
      - curl              # for downloads
    state: present
    update_cache: true
This uses Ansible’s apt module to install the needed packages. (For RHEL/CentOS, you would use yum or similar.)
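A rough RHEL/CentOS equivalent (package names here are approximations, not taken from the original role) would be:
- name: Install required packages (RHEL/CentOS)
  dnf:
    name:
      - gcc
      - make
      - pcre-devel
      - zlib-devel
      - openssl-devel
      - git
      - wget
      - curl
    state: present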
2. Download and Extract Sources
Next, grab the source code:
- NGINX source – download the desired version (e.g. 1.24.0 or 1.23.2) from nginx.org and extract to a temp directory.
- NGX Devel Kit (NDK) – download version 0.3.0 and extract. The NDK is required because the Consul backend module depends on it for hooking into NGINX.
- Consul backend module – clone the GitHub repo into $GOPATH/src/github.com/hashicorp/ngx_http_consul_backend_module.
For example:
- name: Download nginx source
  get_url:
    url: https://nginx.org/download/nginx-1.24.0.tar.gz
    dest: /tmp/nginx.tgz

- name: Extract nginx source
  unarchive:
    src: /tmp/nginx.tgz
    dest: /tmp/
    remote_src: yes

- name: Download ngx_devel_kit (NDK)
  get_url:
    url: https://github.com/simpl/ngx_devel_kit/archive/v0.3.0.tar.gz
    dest: /tmp/ngx_devel_kit-0.3.0.tgz

- name: Extract NDK module
  unarchive:
    src: /tmp/ngx_devel_kit-0.3.0.tgz
    dest: /tmp/
    remote_src: yes

- name: Clone Consul backend module
  git:
    repo: https://github.com/hashicorp/ngx_http_consul_backend_module.git
    dest: /go/src/github.com/hashicorp/ngx_http_consul_backend_module
These tasks correspond to download_sources.yml.
Small patch: The original Consul module C code uses strlen(backend) without a cast, which newer compilers may warn about. We apply a tiny Ansible replace-task patch: replace ngx_str_t ngx_backend = { strlen(backend), backend }; with ngx_str_t ngx_backend = { strlen((const char*)backend), backend }; in ngx_http_consul_backend_module.c. This resolves a type mismatch warning.
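A replace task along these lines does the job (the path assumes the C file lives under src/ in the cloned repo, next to the Go file used in the build step):
- name: Patch strlen() cast in the Consul module C source
  replace:
    path: /go/src/github.com/hashicorp/ngx_http_consul_backend_module/src/ngx_http_consul_backend_module.c
    regexp: 'strlen\(backend\)'
    replace: 'strlen((const char*)backend)'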
3. Build the Consul Backend Module
The Consul backend is written in Go but needs to be built as a C shared library (.so). We use CGO to compile it. Key steps:
- Ensure the target directory (e.g. /usr/local/nginx/ext/) exists.
- In the cloned repo, run go mod init (if needed) and go mod tidy to set up Go modules.
- Use go build -buildmode=c-shared -o /usr/local/nginx/ext/ngx_http_consul_backend_module.so src/ngx_http_consul_backend_module.go. We must include the NDK headers in CGO_CFLAGS (so the Go compiler finds ndk.h).
An example Ansible snippet (build_consul_backend_module.yml):
- name: Ensure nginx ext directory exists
  file:
    path: /usr/local/nginx/ext/
    state: directory
    owner: root
    group: root
    mode: "0755"

- name: Initialize Go modules (if not present)
  command: /usr/local/go/bin/go mod init github.com/hashicorp/ngx_http_consul_backend_module
  args:
    chdir: /go/src/github.com/hashicorp/ngx_http_consul_backend_module
  ignore_errors: true

- name: Tidy Go modules
  command: /usr/local/go/bin/go mod tidy
  args:
    chdir: /go/src/github.com/hashicorp/ngx_http_consul_backend_module

- name: Build Go shared library for Consul backend
  shell: |
    export PATH=/usr/local/go/bin:$PATH
    CGO_CFLAGS="-I /tmp/ngx_devel_kit-0.3.0/src" \
    /usr/local/go/bin/go build -buildmode=c-shared \
      -o /usr/local/nginx/ext/ngx_http_consul_backend_module.so \
      ./src/ngx_http_consul_backend_module.go
  args:
    chdir: /go/src/github.com/hashicorp/ngx_http_consul_backend_module
This compiles the Go code into /usr/local/nginx/ext/ngx_http_consul_backend_module.so. Note that the module’s C code will dlopen this exact path at runtime, so we must ensure the .so is at the expected location and has appropriate permissions.
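A quick sanity check (a sketch; the failure message is illustrative) could be added right after the build:
- name: Check that the Consul backend shared library was built
  stat:
    path: /usr/local/nginx/ext/ngx_http_consul_backend_module.so
  register: consul_so

- name: Fail early if the shared library is missing
  fail:
    msg: "ngx_http_consul_backend_module.so not found at the path NGINX will dlopen"
  when: not consul_so.stat.exists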
4. Configure NGINX Build
With the module’s .so ready, we now compile NGINX itself, including the NDK and Consul module. Use the NGINX ./configure script with the following important options:
- --with-debug (optional, for debugging symbols).
- --prefix=/etc/nginx and other --*-path options to set where NGINX should install (e.g. sbin, conf, log paths) – adjust as needed.
- --add-module=/tmp/ngx_devel_kit-0.3.0 to include the NDK.
- --add-module=/go/src/github.com/hashicorp/ngx_http_consul_backend_module to include the Consul module source.
- Other --with-http_* flags enable SSL, HTTP/2, etc. (as in the snippet below).
An example configure command in Ansible (configure_build.yml):
- name: Configure NGINX build
  command: >
    ./configure
    --prefix=/etc/nginx
    --sbin-path=/usr/sbin/nginx
    --conf-path=/etc/nginx/nginx.conf
    --pid-path=/var/run/nginx.pid
    --lock-path=/var/run/nginx.lock
    --error-log-path=/var/log/nginx/error.log
    --http-log-path=/var/log/nginx/access.log
    --with-http_ssl_module
    --with-http_v2_module
    --with-http_stub_status_module
    --with-http_realip_module
    --with-http_auth_request_module
    --with-http_dav_module
    --with-http_slice_module
    --with-http_addition_module
    --with-http_gunzip_module
    --with-http_gzip_static_module
    --with-http_sub_module
    --with-mail
    --with-mail_ssl_module
    --with-stream
    --with-stream_ssl_module
    --with-debug
    --add-module=/tmp/ngx_devel_kit-0.3.0
    --add-module=/go/src/github.com/hashicorp/ngx_http_consul_backend_module
  args:
    chdir: /tmp/nginx-1.24.0
This corresponds to configure_build.yml. The --add-module flags attach our custom modules. The chdir should match the extracted NGINX source directory (here /tmp/nginx-1.24.0).
5. Compile and Install NGINX
With the Makefile generated, run make and make install:
- name: Compile nginx
  command: make
  args:
    chdir: /tmp/nginx-1.24.0

- name: Install nginx
  command: make install
  args:
    chdir: /tmp/nginx-1.24.0
(This is build_nginx.yml.) Once done, the new NGINX binary and files are in the prefix paths (e.g. /usr/sbin/nginx, /etc/nginx/nginx.conf, etc.). Optionally install any ancillary packages (e.g. apache2-utils for htpasswd), as shown below.
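For example, if you want htpasswd available for basic auth:
- name: Install apache2-utils for htpasswd
  apt:
    name: apache2-utils
    state: present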
6. Systemd Integration
Finally, set up NGINX as a systemd service for easy management. Create a template (e.g. nginx.service.j2) with content like:
[Unit]
Description=NGINX HTTP Server
After=network.target
[Service]
Type=forking
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/usr/sbin/nginx -s reload
ExecStop=/usr/sbin/nginx -s stop
PIDFile=/var/run/nginx.pid
PrivateTmp=true
[Install]
WantedBy=multi-user.target
Then in systemd.yml:
- name: Add nginx systemd service file
  template:
    src: nginx.service.j2
    dest: /etc/systemd/system/nginx.service
  notify: Reload systemd

- name: Enable and start nginx
  systemd:
    name: nginx
    state: started
    enabled: true
This ensures nginx is enabled on boot and starts now. A handler should reload the systemd daemon after copying the unit file (e.g. systemd: daemon_reload: yes).
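A matching handler for the notify above might look like this (the role tree shown earlier has no handlers/ directory, so this assumes you add handlers/main.yml to the role):
- name: Reload systemd
  systemd:
    daemon_reload: yes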
Putting It All Together
With the role defined, use a playbook like nginx-build-playbook.yml:
- name: Rebuild NGINX with Consul module
  hosts: nomadserver
  become: yes
  roles:
    - nginx-with-consul-module
Run it against your inventory:
ansible-playbook -i hosts.ini nginx-build-playbook.yml
This automates the entire flow: install deps, get sources, compile the Consul module, build NGINX with the module, and start NGINX with systemd.
Conclusion
Building NGINX from source to include a custom Consul backend module is straightforward when broken into steps.
With Ansible, we codify each part (dependencies, downloads, patches, compile, service) so it’s repeatable and maintainable.
Once set up, NGINX can dynamically route to Consul-registered services in real-time.
This pattern works not just for the Consul backend module, but for any custom NGINX module: simply download the source, add the --add-module flag, compile, and deploy – all automated via Ansible.
LiveAPI helps you get all your backend APIs documented in a few minutes.
With LiveAPI, you can generate interactive API docs that allow users to search and execute endpoints directly from the browser.
If you're tired of updating Swagger manually or syncing Postman collections, give it a shot.