In a previous series, IoT Architectures Under Pressure, we explored a cost-effective concept for a variety of IoT devices, which we called firmware-less. The idea was based on the assumption that there’s a Hub available to run the "firmware" outside of the devices themselves.
This new post introduces that Hub and its implementation. The project is called Tinkwell (a name inspired by a blend of Tinkering and Well); it's written in C# and available on GitHub. The twist is that although it was designed with IoT in mind, Tinkwell is flexible enough to be useful in other domains as well.
Note: At the time of writing, the code on GitHub is purely experimental. It is not ready for production use: it's far from finished, it does not reflect best practices, and it may change frequently (and diverge greatly from what I'm describing here). Fork it, break it, and explore freely: it is a space for ideas and learning.
An overview
The basic idea is extremely simple: an ensamble is a set of processes, called runners, defined statically using a DSL in a configuration file and launched and monitored by a Supervisor.
Each runner can expose one or more gRPC services and can consume services exposed by other runners. The logic (be it the code to control our firmware-less device, some integration, or anything else) is called a firmlet, and each runner can contain zero, one, or more of them.
The system is distributed by design: you don’t need to know where things are running. A single Supervisor can spawn runners on other machines where a slave Supervisor is active. Similarly, services can be distributed across a network: they don’t have to be local.
There are a number of predefined services (virtually all of them optional):
- Orchestrator: works in tandem with the Supervisor to start and stop runners, locally or remotely.
- Discovery: keeps track of all services in the system, handles load balancing (when available), and provides the information needed to use them.
- Store: records unit-aware measures from the system, connected sensors, runners, or devices.
- Reducer: calculates derived measures. For example:
  JournalBearingTemp = (JournalBearingTemp1 + JournalBearingTemp2) / 2
  JournalBearingOilDeltaTemp = JournalBearingOilReturnLineTemp - JournalBearingOilSupplyLineTemp
- Trigger: monitors the Store and applies a set of user-defined rules to trigger specific actions when certain conditions are met (e.g. alarms); a sketch follows this list.
- Watchdog: probes all runners exposing a HealthCheck service to determine their status and respond accordingly.
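To make the Trigger's role concrete, here is a purely hypothetical sketch of a rules file; the actual schema is defined by the project and may well differ. The rule mirrors the turbine example used later in this post:

// Hypothetical rule format, for illustration only.
[
  {
    "name": "oil_delta_temp_high",
    "condition": "JournalBearingOilDeltaTemp >= 5",
    "action": "/turbine/stop"
  }
]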
In addition, there are several predefined runners to simplify development:
- GrpcHost: hosts one or more gRPC services in a single process. Each service resides in a separate DLL. Useful for reducing boilerplate when writing new services.
- DllHost: hosts one or more runners located in a shared library instead of individual executables. Ideal when multiple firmlets must start/stop together or you want to avoid repetitive scaffolding.
- WasmHost: hosts a WebAssembly firmlet (probably for our firmware-less implementation).
- WebServer: a simple web interface to monitor system status and values recorded in the Store.
The system is also modular: for example, the Store works in-memory by default, but you can add a module to persist data to a database or enable a service for querying historical values. You can also write a module to delegate process supervision to systemd (where available). Even the default services (such as the Store or the Trigger) are entirely optional: Tinkwell provides general-purpose implementations out of the box, but you're free to omit them or swap in your own custom versions to suit your needs.
Services interact with each other and are monitored by the Watchdog.
Configuration
How do we configure this ensamble? Let's see what a simplified system.ensamble file might look like (syntax highlighting for VS/VSCode/vim is available):
// The master GrpcHost exposes the Discovery service
runner discovery "Tinkwell.Bootstrapper.GrpcHost" {
  service runner orchestrator "Tinkwell.Orchestrator.dll" {}
  service runner health "Tinkwell.HealthCheck.dll" {}
}

runner measures "Tinkwell.Bootstrapper.GrpcHost" {
  service runner store "Tinkwell.Store.dll" {}
  service runner reducer "Tinkwell.Reducer.dll" {
    properties {
      units: "path/to/custom-units.json"
      measures: "path/to/derived-measures.json"
    }
  }
  service runner trigger "Tinkwell.Trigger.dll" {
    properties {
      rules: "path/to/alerts.json"
    }
  }
  service runner health "Tinkwell.HealthCheck.dll" {}
}

runner watchdog "Tinkwell.Watchdog" {
  properties {
    interval: "5 s"
    report: true
    measure: true
  }
}
This configuration is likely to be shared among multiple deployments/setups, so you could move it into, for example, a file named shared.ensamble and import it when needed:
import "./shared.ensamble"
runner device_controller "path/to/device_controller" {}
You can also include a runner/service conditionally, based on a run-time check; this is ideal for flexible setups where not all devices are always present:
runner controller "path/to/controller" if "platform = 'linux'" {}
Ensamble files are preprocessed using the Liquid template engine, so you can use variables, conditions, and loops driven by external/run-time parameters.
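For example, assuming the preprocessor receives a parameter named environment (a hypothetical name), a template could enable a runner only in production deployments:

{% comment %} "environment" is a hypothetical run-time parameter. {% endcomment %}
{% if environment == "production" %}
runner telemetry "path/to/telemetry" {}
{% endif %}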
Scripting
In addition to its specific services, each firmlet or runner should expose a generic access mechanism to read or write a value, or to perform an action. This interface, accessible via a simple command-line application, makes shell scripting straightforward and highly flexible:
# Read the current value of a derived measure from the Store.
value=$(tw read /store/JournalBearingOilDeltaTemp/value)
# Stop the turbine when the oil delta temperature is out of range.
if (( $(echo "$value >= 5" | bc -l) )); then
  tw do /turbine/stop
fi
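Building on the same two commands, here is a minimal sketch of a polling loop; the path is the one from the example above, while the 10-second interval and the threshold are arbitrary:

# Keep polling the measure and stop the turbine (once)
# when the threshold is crossed.
while true; do
  value=$(tw read /store/JournalBearingOilDeltaTemp/value)
  if (( $(echo "$value >= 5" | bc -l) )); then
    tw do /turbine/stop
    break
  fi
  sleep 10
done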
Applications
In addition to the firmware-less Hub we mentioned, this approach can be used in other applications as well:
- Lab automation: each lab instrument (whether it’s a spectrometer, thermal controller, or motion stage) often comes with its own driver or control script. Tinkwell can supervise each of these as a separate (monitored) runner process; adding a new device is as simple as adding a new runner entry in the ensamble file. This opens the door to building web dashboards, remote experiment controllers, or integration with lab information systems (LIMS).
- With the Store module, you can track sensor readings (voltages, temperatures, concentrations) using unit-aware types. You can log conditions, perform derived computations, and broadcast changes.
- You could define workflows where launching a new test involves spinning up a sequence of runners: data collector, logger, analyzer, etc. Since runners can be composed hierarchically (one runner invoking others), you can build robust pipelines for repeatable experiments; a sketch follows this list.
- Using gRPC-based discovery and command interfaces, external systems or human operators can see what's running and monitor the progress of an experiment.
- Edge and Fog Computing in industrial IoT: factories increasingly rely on edge devices to perform localized processing (e.g. anomaly detection from vibration sensors, temperature thresholds for safety cutoffs). Tinkwell offers a lightweight, resilient orchestration layer that doesn’t need containers or a full Kubernetes cluster, making it perfect for rugged industrial PCs at the edge.
- Test benches and automated QA stations: like in labs, automated testing environments in industrial R&D departments often involve specialized hardware setups. Scripts or binaries controlling signal generators, power supplies, or data loggers can be isolated into runners.
- Factory dashboard backends: you could use Tinkwell as the service layer behind a dashboard—runners provide data from physical devices or simulators, the Store aggregates it with unit-aware tracking, and the Watchdog ensures that data providers are functioning.
- Safety systems and watchdog layers: in systems where a process crash is a serious concern (e.g. controlling furnaces, hydraulic presses), supervision is vital. Tinkwell's crash counters and restart logic provide a built-in "watchdog-like" system, minimizing downtime and potentially averting dangerous conditions if used wisely.
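As a sketch of the pipeline idea above, a test run could be described by its own ensamble file. Every path and runner name here is a placeholder; only the syntax is the one shown earlier:

import "./shared.ensamble"

// Hypothetical test-bench pipeline: all paths are placeholders.
runner collector "path/to/data_collector" {}
runner logger "path/to/logger" {}
// Start the analyzer only where the platform supports it.
runner analyzer "path/to/analyzer" if "platform = 'linux'" {}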
Development
Since Tinkwell stretches across some very different use cases, I’m splitting it into multiple packages: a core foundation plus domain-specific modules (like the upcoming firmware-less one).
Most of the real action will unfold in the GitHub repo, but stay tuned: future posts will zoom in on specific pieces, especially when we dive into building out the firmware-less module.