MCP IoT Gateway — Test Report
MQTT pipeline correctness, cloud broker performance, and resilience under load
Generated 2026-02-08 • Node v25.2.1 • Darwin aarch64 • HiveMQ Cloud (us-west-2) + Local HiveMQ CE
Key result: cloud sustained ceiling of 2,000 msg/s.
This report tests an IoT messaging gateway — middleware that ingests device telemetry over MQTT and routes it to storage backends (PostgreSQL, InfluxDB, S3). We tested against a local broker (HiveMQ CE, zero network overhead) and a managed cloud broker (HiveMQ Cloud on AWS us-west-2, ~100ms RTT). Load was escalated from baseline throughput up through deliberate overload to map the full performance envelope.
1. Test Strategy
Validate the MCP IoT Gateway's MQTT pipeline across correctness, integration, cloud readiness, performance, and resilience:
- Correctness: Topic trie wildcard matching, pipeline routing, disk-buffered store-and-forward, MCP tool API contracts
- Integration: Real PostgreSQL writes/queries, S3 (MinIO) object storage, MQTT pub/sub round-trips
- Cloud Readiness: TLS authentication, credential rejection, SQL injection safety against a live HiveMQ Cloud cluster
- Performance Envelope: Burst and sustained throughput ceilings for local vs cloud brokers
- Resilience: Message loss thresholds, latency drift, and queue saturation under stress
The test suite is structured as a pyramid: fast mocked unit tests at the base, integration tests with real databases in the middle, and expensive performance/stress tests at the top. Each tier validates a different failure mode — logic errors, integration mismatches, and capacity limits respectively. The local broker provides a network-free baseline; the cloud broker introduces TLS overhead, ~100ms round-trip latency, and real-world broker queuing behavior.
| Tier | Tests | Duration | Dependencies | Purpose |
| --- | --- | --- | --- | --- |
| Unit | 86 | 0.6s | None (mocks) | Component-level correctness |
| Integration | 62 | 7.3s | Docker (HiveMQ, PG, MinIO) | Real adapter behavior |
| Topology | 43 | 0.9s | Mock MCP servers | Multi-node tool discovery |
| Cloud | 10 | 18s | HiveMQ Cloud (TLS) | Auth, security, pipeline routing |
| Benchmark | 10 | 4.2min | Local + Cloud MQTT | Baseline performance profile |
| Stress | 20 | 7.4min | Local + Cloud MQTT | Ceiling detection & breaking points |
Why MQTT QoS 0 Bottlenecks Emerge
MQTT QoS 0 is fire-and-forget — no delivery acknowledgment at the protocol level. Under load, bottlenecks emerge at predictable layers:
- TCP send buffer: Node.js writes to a kernel buffer. When it fills, `publish()` blocks until space is available.
- Broker ingress queue: The broker must fan-in all publishes, match subscription filters, and dispatch. Under pressure, queue depth grows and latency increases.
- TLS overhead: Cloud connections add per-packet encryption plus ~100ms RTT. Each `await publish()` waits for the TCP ACK before proceeding.
- Client-side receive buffer: The gateway's `MessageBuffer` ring buffer can drop older messages if capacity is exceeded during bursts.
QoS 0 is chosen deliberately for high-throughput telemetry — sensor readings where occasional loss is acceptable but volume and speed matter. The tradeoff is that the protocol provides no retry mechanism; any loss is silent. These tests quantify exactly how much loss occurs (spoiler: none, up to 2k/s sustained) and where latency begins to degrade.
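The client-side drop behavior described above can be sketched as a bounded ring buffer. This is an illustrative `MessageBuffer`, not the gateway's actual implementation:

```typescript
// Minimal bounded ring buffer: when full, the newest message evicts the oldest.
// Illustrative sketch of the drop behavior; QoS 0 makes this loss silent.
class MessageBuffer<T> {
  private items: T[] = [];
  private dropped = 0;

  constructor(private readonly capacity: number) {}

  push(msg: T): void {
    if (this.items.length >= this.capacity) {
      this.items.shift(); // evict oldest to make room
      this.dropped++;
    }
    this.items.push(msg);
  }

  drain(): T[] {
    const out = this.items;
    this.items = [];
    return out;
  }

  get droppedCount(): number {
    return this.dropped;
  }
}

// A burst of 10 messages into a 4-slot buffer silently drops the oldest 6.
const buf = new MessageBuffer<number>(4);
for (let i = 0; i < 10; i++) buf.push(i);
console.log(buf.droppedCount); // 6
console.log(buf.drain()); // [ 6, 7, 8, 9 ]
```

A production buffer would also surface `droppedCount` as a metric, so that silent QoS 0 loss becomes observable.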
2. System Architecture
The gateway is a modular monorepo where protocol adapters (MQTT, OPC UA, Modbus, S7) ingest device data and persistence adapters (PostgreSQL, InfluxDB, S3) store it. The pipeline runtime between them applies topic-based routing rules, batches writes for efficiency, and provides disk-backed store-and-forward when a persistence target is unavailable.
Protocol Adapters Persistence Adapters
┌──────────────┐ ┌───────────────┐
│ MQTT │ │ PostgreSQL │
│ OPC UA │ │ InfluxDB │
│ Modbus TCP │ onMessage() │ S3 / MinIO │
│ Siemens S7 │───────┐ └──────┬────────┘
└──────────────┘ │ │ write(records)
▼ │
PipelineManager────────┘
│ │
TopicTrie BatchQueue
│ │
│ flush() ──▶ persistence
│
(on failure)
▼
DiskBuffer (NDJSON) ──▶ replay on retry
| Service | Image / Host | Port | Role |
| --- | --- | --- | --- |
| HiveMQ CE | hivemq/hivemq-ce:latest | 1883 | Local MQTT broker (no TLS, no auth) |
| PostgreSQL 17 | postgres:17 | 5432 | Persistence adapter target |
| MinIO | minio/minio | 9000 | S3-compatible object storage |
| HiveMQ Cloud | *.usw2.aws.hivemq.cloud | 8883 | Managed cloud MQTT (TLS, us-west-2) |
3. Test Execution
3.1 Unit Tests — 86/86 PASS
Isolated tests with all external dependencies mocked. Validates component logic without I/O.
| Suite | Tests | Duration | Coverage |
| --- | --- | --- | --- |
| TopicTrie | 25 | 9ms | MQTT 3.1.1 wildcard matching (+, #), edge cases (empty levels, Unicode, 10-level depth) |
| PipelineManager | 24 | 69ms | Message routing, batch accumulation, store-and-forward fallback, rule lifecycle |
| PipelineRegistry | 19 | 16ms | CRUD operations, persistence round-trips, config merge semantics |
| DiskBuffer | 18 | 99ms | NDJSON append/drain, segment rotation, corruption handling, concurrent access |
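For reference, the MQTT 3.1.1 wildcard semantics that the TopicTrie suite exercises can be expressed as a flat matcher. This is a sketch of the matching rules, not the gateway's trie:

```typescript
// MQTT 3.1.1 filter matching sketch: "+" matches exactly one topic level,
// "#" matches the remainder (including zero levels) and must be the last level.
function topicMatches(filter: string, topic: string): boolean {
  const f = filter.split("/");
  const t = topic.split("/");
  for (let i = 0; i < f.length; i++) {
    if (f[i] === "#") return i === f.length - 1; // "#" is only valid as the final level
    if (i >= t.length) return false; // filter is longer than the topic
    if (f[i] !== "+" && f[i] !== t[i]) return false; // literal level mismatch
  }
  return f.length === t.length; // topic must not have extra levels
}

console.log(topicMatches("factory/+/temp", "factory/line1/temp")); // true
console.log(topicMatches("factory/#", "factory/line1/temp/raw")); // true
console.log(topicMatches("factory/#", "factory")); // true ("#" matches the parent level)
console.log(topicMatches("factory/+", "factory/line1/temp")); // false ("+" is one level only)
```

A trie makes this scale: instead of testing every registered filter per message, shared prefixes are walked once, which is why the suite stresses 10-level depth and edge cases.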
3.2 Integration Tests — 62/62 PASS
End-to-end validation against real Docker services. Data flows through actual network connections and storage engines.
| Suite | Tests | Duration | Coverage |
| --- | --- | --- | --- |
| Pipeline E2E | 7 | 5.7s | MQTT → PipelineManager → PostgreSQL/S3 — verifies data arrives in target tables/objects |
| Pipeline MCP Tools | 16 | 8ms | All 9 management tools: add/remove/update/list/enable/disable/stats/flush/configure |
| PostgreSQL Adapter | 18 | 168ms | Writes, queries, JSONB round-trips, SQL injection parameterization, schema browsing |
| S3 Adapter (MinIO) | 15 | 466ms | Date-partitioned key generation, metadata sanitization, batch writes, prefix listing |
| Store-and-Forward | 6 | 1.5s | Adapter failure → disk buffer → adapter recovery → automatic replay |
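The store-and-forward path being tested can be sketched with plain NDJSON file operations. The file name and record shape here are illustrative, not the gateway's actual segment layout:

```typescript
import { appendFileSync, readFileSync, writeFileSync, existsSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Store-and-forward sketch: when a persistence adapter fails, records are
// appended to an NDJSON file (one JSON object per line); once the adapter
// recovers, the file is drained and the records replayed.
const bufferPath = join(tmpdir(), "gateway-buffer.ndjson");
writeFileSync(bufferPath, ""); // start clean for the demo

function bufferRecord(record: object): void {
  appendFileSync(bufferPath, JSON.stringify(record) + "\n");
}

function drainRecords(): object[] {
  if (!existsSync(bufferPath)) return [];
  const lines = readFileSync(bufferPath, "utf8").split("\n").filter(Boolean);
  writeFileSync(bufferPath, ""); // truncate once read, so replay happens exactly once
  return lines.map((l) => JSON.parse(l));
}

// Simulated adapter outage: two writes land on disk, then replay on recovery.
bufferRecord({ topic: "factory/line1/temp", value: 21.4, ts: 1736300000 });
bufferRecord({ topic: "factory/line1/temp", value: 21.6, ts: 1736300001 });
console.log(drainRecords().length); // 2
```

Because NDJSON is append-only and line-delimited, a partially written final line (from a crash mid-append) can be detected and discarded without corrupting earlier records, which is what the corruption-handling tests in the DiskBuffer suite cover.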
3.3 Topology Tests — 43/43 PASS
Multi-gateway network scenarios — validates tool discovery and cross-node communication across various network shapes.
| Suite | Tests | Network Shapes |
| --- | --- | --- |
| Tools Inspector | 19 | Minimal (3-node), Small Factory (6-node) |
| Complex Scenarios | 6 | Multi-Site (9-node), Mesh-4, Generated (7–11 node) |
| Network Configuration | 10 | Validation, generation, stats computation |
| Network Harness | 5 | Start/stop, role filtering, connection mapping, failure injection |
| Topology Scenarios | 3 | Ring-6, Small Factory, Generated (11-node) |
3.4 Cloud Tests — 10/10 PASS
Live HiveMQ Cloud cluster with TLS mutual authentication (mqtts://, port 8883).
Pipeline Validation (4 tests)
- TLS connection with credentials: PASS
- Pub/sub round-trip latency: PASS
- Cloud MQTT → local PostgreSQL (100/100): PASS
- Local vs Cloud throughput comparison: PASS
Security Validation (6 tests)
- Valid credentials accepted: PASS
- Invalid password rejected: PASS
- Empty credentials rejected: PASS
- Default CA bundle TLS validation: PASS
- SQL injection in topic → parameterized: PASS
- Metadata injection → stored as JSONB: PASS
Security tests verify that malicious payloads in MQTT topics and message metadata are safely parameterized by the PostgreSQL adapter — no raw SQL execution occurs. Authentication tests confirm the broker correctly rejects invalid and empty credentials over the TLS connection.
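The parameterization being verified looks roughly like this (node-postgres-style `$n` placeholders; the table and column names are hypothetical):

```typescript
// Parameterized insert sketch: the topic and metadata travel as bind
// parameters, never spliced into the SQL text, so injection payloads are inert.
// Table/column names are illustrative, not the gateway's actual schema.
function buildInsert(topic: string, payload: string, metadata: object) {
  return {
    text: "INSERT INTO telemetry (topic, payload, metadata) VALUES ($1, $2, $3::jsonb)",
    values: [topic, payload, JSON.stringify(metadata)],
  };
}

const evil = "sensors/'; DROP TABLE telemetry; --";
const q = buildInsert(evil, "42", { note: "' OR 1=1" });
console.log(q.text.includes("DROP")); // false — the payload never enters the SQL text
console.log(q.values[0] === evil); // true — it is delivered only as a bound value
```

The driver sends the SQL text and the values separately, so the database never parses the malicious string as SQL; the metadata injection test works the same way, with the payload landing intact inside a JSONB column.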
3.5 Benchmark — 10/10 PASS
Benchmarks establish a performance baseline at moderate loads. "Burst" fires all messages as fast as possible (queue-depth stress). "Sustained" sends at a fixed rate for 15 seconds (throughput stability). Each tier runs local and cloud back-to-back for direct comparison.
| Count | Target | Sent | Recv | Dlvr% | Send msg/s | Avg Latency |
| --- | --- | --- | --- | --- | --- | --- |
| 50 | local | 50 | 50 | 100% | 16,667 | 5 ms |
| 50 | cloud | 50 | 50 | 100% | 50,000 | 84 ms |
| 250 | local | 250 | 250 | 100% | 50,000 | 21 ms |
| 250 | cloud | 250 | 250 | 100% | 50,000 | 153 ms |
| 1,000 | local | 1,000 | 1,000 | 100% | 62,500 | 22 ms |
| 1,000 | cloud | 1,000 | 1,000 | 100% | 55,556 | 246 ms |
Delivery % = messages received / messages sent. Avg Latency = mean round-trip time (publish to receive-back). The local broker delivers in single-digit ms; cloud adds ~100-250ms due to network RTT and TLS encryption overhead. All tiers achieve 100% delivery.
| Rate/s | Target | Sent | Recv | Dlvr% | p50 | p95 | p99 | Drift | Verdict |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 10 | local | 150 | 150 | 100% | 2 ms | 6 ms | 18 ms | +2 ms | OK |
| 10 | cloud | 150 | 150 | 100% | 104 ms | 166 ms | 171 ms | +85 ms | OK |
| 100 | local | 1,500 | 1,500 | 100% | 2 ms | 3 ms | 4 ms | +2 ms | OK |
| 100 | cloud | 1,500 | 1,500 | 100% | 132 ms | 187 ms | 199 ms | +86 ms | OK |
| 200 | local | 3,000 | 3,000 | 100% | 2 ms | 3 ms | 3 ms | +1 ms | OK |
| 200 | cloud | 3,000 | 3,000 | 100% | 125 ms | 192 ms | 205 ms | +97 ms | OK |
p50 / p95 / p99 are latency percentiles — p50 is the median, p99 captures the slowest 1% of messages. Drift compares p95 in the last 20% of the test window vs the first 20%; positive drift indicates latency is increasing over time. Low drift values here confirm the broker handles these rates without queue accumulation.
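Under that definition, drift can be computed directly from the latency samples. A sketch, assuming nearest-rank percentiles:

```typescript
// Latency drift sketch: p95 over the last 20% of samples minus p95 over the
// first 20%. Positive drift means latency is rising across the test window.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

function latencyDrift(latencies: number[]): number {
  const n = Math.max(1, Math.floor(latencies.length * 0.2));
  return percentile(latencies.slice(-n), 95) - percentile(latencies.slice(0, n), 95);
}

// A steady 100ms stream shows zero drift; a late ramp shows positive drift.
const steady = Array.from({ length: 100 }, () => 100);
const ramping = Array.from({ length: 100 }, (_, i) => 100 + (i >= 80 ? i - 80 : 0));
console.log(latencyDrift(steady)); // 0
console.log(latencyDrift(ramping) > 0); // true — queue accumulation would look like this
```

This is why stable positive drift on the cloud rows reads as a fixed RTT offset: the value stays roughly constant across rates instead of compounding within a run.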
3.6 Stress to Crash — 19/20 PASS, 1 TIMEOUT
Stress tests deliberately overload the system to find where degradation begins. Burst sizes escalate from 1k to 50k messages; sustained rates from 500/s to 5,000/s. The single timeout (cloud at 5k/s sustained) is an expected finding — it identifies the throughput ceiling for a single TLS connection to HiveMQ Cloud.
| Count | Target | Sent | Recv | Dlvr% | msg/s | p50 | p95 | p99 | Heap | RSS | GCs | GC ms | Verdict |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1k | local | 1,000 | 1,000 | 100% | 43k | 16 | 21 | 21 | 54M | 165M | 1 | 4.7 | OK |
| 1k | cloud | 1,000 | 1,000 | 100% | 45k | 401 | 465 | 466 | 48M | 180M | 3 | 17.6 | OK |
| 5k | local | 5,000 | 5,000 | 100% | 40k | 39 | 51 | 51 | 43M | 207M | 4 | 49.8 | OK |
| 5k | cloud | 5,000 | 5,000 | 100% | 31k | 427 | 502 | 503 | 64M | 219M | 2 | 32.6 | OK |
| 10k | local | 10,000 | 10,000 | 100% | 36k | 28 | 72 | 74 | 92M | 237M | 3 | 25.3 | OK |
| 10k | cloud | 10,000 | 10,000 | 100% | 51k | 562 | 787 | 795 | 120M | 307M | 2 | 23.8 | OK |
| 25k | local | 25,000 | 25,000 | 100% | 83k | 16 | 27 | 36 | 134M | 377M | 5 | 23.2 | OK |
| 25k | cloud | 25,000 | 25,000 | 100% | 110k | 979 | 1,949 | 2,083 | 101M | 352M | 7 | 30.4 | SLOW |
| 50k | local | 50,000 | 50,000 | 100% | 86k | 15 | 35 | 38 | 118M | 377M | 11 | 44.6 | OK |
| 50k | cloud | 50,000 | 50,000 | 100% | 201k | 2,323 | 4,257 | 4,546 | 79M | 365M | 13 | 55.5 | SLOW |
Heap is actively allocated V8 JavaScript memory. RSS (Resident Set Size) is total process memory including the V8 engine, OS-level buffers, and TLS state. GCs = garbage collection cycles; GC ms = cumulative pause time. Notably, zero messages were lost at any tier — HiveMQ Cloud queues rather than drops. The latency degradation at 25k+ is purely broker-side queue depth.
The p99 curve is roughly exponential — doubling burst size at the high end more than doubles tail latency. Up to 10k messages, the broker dispatches within ~800ms. Beyond that, the ingress queue saturates and messages wait in line. The inflection point is between 10k and 25k for HiveMQ Cloud's Starter tier.
| Rate/s | Target | Sent | Recv | Dlvr% | Act/s | p50 | p99 | Drift | Heap | RSS | EL p99 | GCs | Verdict |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 500 | local | 5,000 | 5,000 | 100% | 500 | 1 | 3 | +1 | 79M | 364M | 21ms | 3 | OK |
| 500 | cloud | 5,000 | 5,000 | 100% | 500 | 101 | 193 | +93 | 62M | 365M | 21ms | 3 | OK |
| 1k | local | 10,000 | 10,000 | 100% | 1,000 | 1 | 3 | +2 | 63M | 373M | 21ms | 5 | OK |
| 1k | cloud | 10,000 | 10,000 | 100% | 1,000 | 93 | 181 | +84 | 86M | 365M | 21ms | 4 | OK |
| 2k | local | 20,000 | 20,000 | 100% | 2,000 | 1 | 3 | +2 | 115M | 391M | 22ms | 5 | OK |
| 2k | cloud | 20,000 | 20,000 | 100% | 2,000 | 88 | 180 | +83 | 168M | 416M | 21ms | 5 | OK |
| 5k | local | 50,000 | 50,000 | 100% | 5,000 | 1 | 4 | +1 | 123M | 388M | 25ms | 14 | OK |
| 5k | cloud | — | — | — | — | — | — | — | — | — | — | — | TIMEOUT |
The 5k/s cloud run exceeded the 50s test limit: TLS socket backpressure prevents sustaining 5k msg/s on a single connection.
EL p99 = Node.js event loop delay at the 99th percentile (measured via monitorEventLoopDelay). Values around 21ms reflect the timer resolution floor, not actual blocking. Drift compares early vs late latency; positive but stable drift indicates a fixed RTT offset rather than compounding delay. The cloud sustained ceiling is 2,000 msg/s with 100% delivery and stable latency. At 5k/s, the TLS send buffer fills faster than the network can drain it.
4. Telemetry & Instrumentation
Beyond message delivery metrics, four additional monitoring channels were instrumented to attribute performance characteristics to specific subsystems: V8 garbage collection, event loop responsiveness, memory allocation, and per-second time-series snapshots. This distinguishes between application-level bottlenecks (GC pauses, event loop blocking) and infrastructure-level limits (network, broker queuing).
| Channel | What It Measures | Why It Matters |
| --- | --- | --- |
| Garbage Collection | GC pause count, total pause duration, longest single pause (via PerformanceObserver) | GC pauses are stop-the-world — the entire process freezes. Long pauses during send loops create throughput gaps and latency spikes. |
| Event Loop Delay | p50 and p99 delay (via monitorEventLoopDelay, 20ms resolution) | Node.js is single-threaded. If the event loop is blocked by synchronous work, incoming messages queue up and latency compounds. |
| Memory (Heap + RSS) | V8 heap used (live JS objects) and RSS (total OS-allocated process memory) | Heap tracks active allocations; RSS includes the V8 engine, TLS buffers, and OS pages. Steady growth without GC reclamation indicates a memory leak. |
| Per-Second Snapshots | Cumulative sent/received, heap, RSS, and event loop lag every 1 second | Reveals temporal behavior — whether the in-flight message backlog grows or stabilizes, and whether memory follows a healthy sawtooth (GC reclaims periodically). |
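The first two channels can be wired up with stock Node.js APIs. A sketch using `monitorEventLoopDelay` and a `PerformanceObserver` on `gc` entries (both standard `perf_hooks` APIs); the window length and log format are illustrative:

```typescript
import { monitorEventLoopDelay, PerformanceObserver } from "node:perf_hooks";

// Event loop delay histogram at 20ms resolution, matching the report's setup.
// Histogram values are nanoseconds, hence the 1e6 divisor when printing ms.
const loopDelay = monitorEventLoopDelay({ resolution: 20 });
loopDelay.enable();

// GC channel: count collection cycles and accumulate pause time.
let gcCount = 0;
let gcTotalMs = 0;
const gcObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    gcCount++;
    gcTotalMs += entry.duration; // duration is in milliseconds
  }
});
gcObserver.observe({ entryTypes: ["gc"] });

// At the end of the measurement window, report the same figures as the tables.
setTimeout(() => {
  console.log(`EL p50: ${(loopDelay.percentile(50) / 1e6).toFixed(1)} ms`);
  console.log(`EL p99: ${(loopDelay.percentile(99) / 1e6).toFixed(1)} ms`);
  console.log(`GC: ${gcCount} cycles, ${gcTotalMs.toFixed(1)} ms total pause`);
  loopDelay.disable();
  gcObserver.disconnect();
}, 200);
```

Because the sampler itself runs on a timer, readings cluster around the resolution floor (~21ms in the tables) even when nothing is blocking; only values well above that floor indicate real event loop contention.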
| Tier | Target | GC Count | Total Pause | Max Pause | Per 10k msgs | Assessment |
| --- | --- | --- | --- | --- | --- | --- |
| 1k burst | local | 1 | 4.7 ms | 4.7 ms | 10 | Negligible |
| 1k burst | cloud | 3 | 17.6 ms | 12.8 ms | 30 | Low |
| 10k burst | cloud | 2 | 23.8 ms | 13.1 ms | 2 | Low |
| 50k burst | local | 11 | 44.6 ms | 5.0 ms | 2.2 | Healthy |
| 50k burst | cloud | 13 | 55.5 ms | 7.0 ms | 2.6 | Healthy |
| 5k/s sust. | local | 14 | 117.5 ms | 13.9 ms | 2.8 | Healthy |
| 2k/s sust. | cloud | 5 | 43.2 ms | 12.5 ms | 2.5 | Healthy |
GC is not a bottleneck. The longest single pause was 13.9ms (during 5k/s local sustained). The rate stabilizes at 2–3 collections per 10k messages regardless of throughput, indicating well-behaved memory allocation patterns.
| Tier | Target | EL p50 | EL p99 | Assessment |
| --- | --- | --- | --- | --- |
| 1k burst | local | 21.1 ms | 26.4 ms | Timer resolution baseline |
| 1k burst | cloud | 21.1 ms | 33.2 ms | Slight TLS context-switch overhead |
| 50k burst | local | 21.1 ms | 24.5 ms | Stable under high throughput |
| 50k burst | cloud | 21.1 ms | 23.1 ms | No blocking |
| 5k/s sust. | local | 20.4 ms | 25.2 ms | Healthy at max local throughput |
| 2k/s sust. | cloud | 20.4 ms | 21.5 ms | Well within bounds |
Event loop stays at timer-resolution floor (~21ms) across all tiers. Even at 50k burst or 5k/s sustained, worst-case EL delay is 25–33ms. The Node.js runtime is never the bottleneck — all latency degradation is attributable to the network and broker.
| Sec | Sent | Recv | In-Flight | Heap | RSS | EL Lag |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 1,999 | 1,837 | 162 | 98 MB | 395 MB | 21 ms |
| 2 | 3,999 | 3,823 | 176 | 125 MB | 396 MB | 21 ms |
| 3 | 5,999 | 5,825 | 174 | 108 MB | 400 MB | 21 ms |
| 4 | 7,999 | 7,825 | 174 | 136 MB | 401 MB | 21 ms |
| 5 | 10,001 | 9,813 | 188 | 118 MB | 404 MB | 21 ms |
| 6 | 12,003 | 11,827 | 176 | 146 MB | 406 MB | 21 ms |
| 7 | 14,003 | 13,641 | 362 | 128 MB | 409 MB | 22 ms |
| 8 | 16,003 | 15,739 | 264 | 156 MB | 411 MB | 22 ms |
| 9 | 18,001 | 17,679 | 322 | 139 MB | 415 MB | 22 ms |
| 10 | 20,000 | 19,773 | 227 | 167 MB | 416 MB | 22 ms |
| 11 | 20,000 | 20,000 | 0 | 168 MB | 416 MB | 22 ms |
In-Flight = sent − received at each snapshot — the number of messages currently in the network/broker pipeline. This oscillates between 162 and 362, never growing unbounded, confirming the broker dispatches at roughly the send rate. The heap oscillates in a sawtooth pattern (rises, drops on GC) with no upward drift — no memory leak. All 20,000 messages arrive by second 11.
5. Metrics & Visualizations
| Rate | p50 | p95 | p99 |
| --- | --- | --- | --- |
| 10/s | 104 ms | 166 ms | 171 ms |
| 25/s | 134 ms | 194 ms | 205 ms |
| 100/s | 132 ms | 187 ms | 199 ms |
| 500/s | 101 ms | 181 ms | 193 ms |
| 1k/s | 93 ms | 169 ms | 181 ms |
| 2k/s | 88 ms | 170 ms | 180 ms |
p50 actually decreases at higher rates (134ms at 25/s → 88ms at 2k/s). This is a warm-connection effect: at higher throughput the TCP congestion window stays open, Nagle's algorithm coalesces small publishes into fewer packets, and fixed per-write costs amortize across more messages. The broker handles every tier without queue buildup.
| Rate | Local Heap | Local RSS | Cloud Heap | Cloud RSS |
| --- | --- | --- | --- | --- |
| 500/s | 79 MB | 364 MB | 62 MB | 365 MB |
| 1k/s | 63 MB | 373 MB | 86 MB | 365 MB |
| 2k/s | 115 MB | 391 MB | 168 MB | 416 MB |
| 5k/s | 123 MB | 388 MB | TIMEOUT | — |
Local Broker Memory
- Heap range: 63–123 MB
- RSS range: 364–391 MB
- Growth pattern: sawtooth (GC reclaims)
Cloud Broker Memory
- Heap range: 62–168 MB
- RSS range: 365–416 MB
- Growth pattern: sawtooth (GC reclaims)
Cloud heap at 2k/s (168 MB) is elevated compared to local (115 MB) because in-flight messages accumulate in the receive buffer while waiting for network delivery. RSS is consistently higher than heap because it includes V8 engine overhead, TLS state, and OS-level socket buffers. Both follow a sawtooth pattern (allocate → GC reclaims) with no upward trend, confirming no memory leak.
6. Findings
F1: Zero message loss at all tested rates (up to 50k burst, 2k/s sustained).
Neither local nor cloud dropped a single message when the connection remained healthy. HiveMQ Cloud (Starter tier) queues messages rather than dropping them, providing reliable delivery even under heavy load.
F2: Cloud sustained latency is flat from 10/s to 2k/s.
p50 stays between 88–134ms across the entire range. The broker is not the bottleneck — network round-trip time dominates. Latency is essentially a function of geography (client → us-west-2 → client), not load.
F3: Local HiveMQ CE adds negligible overhead.
p99 never exceeded 76ms even at 50k burst or 5k/s sustained. The pipeline runtime (TopicTrie matching, batch accumulation, disk buffer fallback) contributes sub-millisecond processing time.
F4: Cloud burst latency degrades above 10k messages.
At 25k burst: p99 = 2.1s. At 50k burst: p99 = 4.5s. The broker's ingress queue grows faster than its dispatch rate at these volumes. For typical IoT telemetry (periodic sensor readings), bursts of this size are unlikely — but this defines the upper bound.
F5: Cloud sustained ceiling is 2k msg/s (5k/s times out).
At 5k/s, the TLS socket cannot flush publishes fast enough. Each `await publish()` blocks on the TCP ACK, and the send loop cannot maintain the target rate. This is a hard limit for a single TLS connection to HiveMQ Cloud.
F6: GC and event loop are never the bottleneck.
Max GC pause: 13.9ms. Event loop p99: 25–33ms across all tiers. The Node.js runtime processes messages far faster than the network can carry them. All observed latency degradation is attributable to network/broker queuing.
F7: Security surface is clean.
SQL injection payloads in topics and metadata are safely parameterized by the PostgreSQL adapter. Invalid/empty credentials are rejected by HiveMQ Cloud. TLS works with the default CA bundle.
F8: Store-and-forward resilience verified.
When a persistence adapter fails, records buffer to disk (NDJSON segments). Records survive process restart and replay automatically when the adapter recovers. No data loss during simulated outages.
7. Recommendations
| Priority | Recommendation | Rationale |
| --- | --- | --- |
| HIGH | Add receive buffer backpressure | At 2k/s cloud, heap reaches 168MB from in-flight message accumulation. Without a cap, a long-running pipeline at high throughput could exhaust available memory. Implement a bounded buffer with a configurable high-water mark. |
| HIGH | Implement QoS 1 for critical pipelines | All tests used QoS 0 (no delivery acknowledgment). For alarm conditions or safety-critical telemetry, QoS 1 provides broker-level delivery confirmation with automatic retry on failure. |
| MED | Add publish-side rate limiting | At 50k burst, cloud p99 hits 4.5s. A configurable rate limiter (e.g., 2k/s cap) would keep tail latency under 200ms and prevent broker queue saturation. |
| MED | Monitor heap + RSS in production | 168MB at 2k/s is acceptable, but memory scales roughly linearly with throughput. Set alerts at 256MB to provide early warning before reaching container limits. |
| MED | Add reconnection benchmarks | Current tests use stable connections. Real deployments face intermittent network loss. Benchmark reconnection time, message loss during reconnect, and store-and-forward handoff latency. |
| LOW | Non-blocking publishes for QoS 0 | Currently each publish awaits the TCP write. For QoS 0 (no acknowledgment needed), fire-and-forget without await would remove the per-message RTT bottleneck and enable 100k+ msg/s. |
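The rate-limiting recommendation could take the shape of a token bucket. A hypothetical helper, not part of the gateway:

```typescript
// Token-bucket rate limiter sketch for the publish path. Tokens refill at
// ratePerSec up to a burst allowance; a publish is admitted only when a full
// token is available, capping sustained throughput at the configured rate.
class RateLimiter {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private readonly ratePerSec: number, private readonly burst: number) {
    this.tokens = burst;
  }

  private refill(): void {
    const now = Date.now();
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.burst, this.tokens + elapsedSec * this.ratePerSec);
    this.lastRefill = now;
  }

  tryAcquire(): boolean {
    this.refill();
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false; // caller queues or delays the publish
  }
}

// A 2k/s limiter with a 100-message burst allowance: the first ~100 publishes
// in a tight loop pass immediately, the rest are throttled to the refill rate.
const limiter = new RateLimiter(2000, 100);
let admitted = 0;
for (let i = 0; i < 500; i++) if (limiter.tryAcquire()) admitted++;
console.log(admitted >= 100 && admitted < 500); // true
```

A caller that gets `false` back would typically `await` a short delay and retry, turning broker queue saturation into bounded client-side pacing.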
8. Opportunities
These are not bugs or urgent fixes — they're architectural improvements and expanded test coverage that would strengthen the system for production deployment at scale.
Performance
- Batch publishing: Remove `await` from QoS 0 writes to eliminate serial TCP round-trip overhead. Potential: 100k+ msg/s burst.
- Connection pooling: Multiple MQTT connections to the same broker enable linear throughput scaling. Two connections ≈ 2× throughput at the cloud ceiling.
- Write coalescing: Batch multiple MQTT messages into fewer TCP packets to reduce per-message overhead and kernel context switches.
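The batch-publishing idea can be illustrated with a mock publish. The 1ms simulated round-trip is illustrative, not a measurement from the gateway:

```typescript
// Awaited vs fire-and-forget publish loops. With QoS 0 there is no protocol
// acknowledgment to wait for, so awaiting each publish serializes the loop on
// TCP round-trips for no delivery benefit.
const mockPublish = () => new Promise<void>((resolve) => setTimeout(resolve, 1));

async function serialPublish(count: number): Promise<number> {
  const start = Date.now();
  for (let i = 0; i < count; i++) await mockPublish(); // one simulated RTT per message
  return Date.now() - start;
}

async function fireAndForget(count: number): Promise<number> {
  const start = Date.now();
  const pending: Promise<void>[] = [];
  for (let i = 0; i < count; i++) pending.push(mockPublish()); // no per-message wait
  await Promise.all(pending); // single flush at the end
  return Date.now() - start;
}

async function main() {
  const serialMs = await serialPublish(100); // roughly count x RTT
  const batchMs = await fireAndForget(100); // roughly one RTT total
  console.log(serialMs > batchMs); // true
}
main();
```

In real code the unawaited variant still needs backpressure handling (the TCP send buffer section above), which is exactly what the bounded-buffer and rate-limiter recommendations provide.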
Observability
- Prometheus metrics: Expose throughput, latency percentiles, buffer depth, and disk buffer size as a
/metrics endpoint for Grafana dashboards.
- Live load-test dashboard: Stream per-second snapshots to a real-time dashboard during performance tests for immediate bottleneck identification.
- Structured logging: JSON-formatted logs for pipeline events (rule matched, batch flushed, store failure, replay) compatible with log aggregation tools.
Testing
- Chaos engineering: Simulate network partitions and broker restarts during active message flow using tools like Toxiproxy.
- Multi-protocol pipelines: Test OPC UA → MQTT → InfluxDB chains for full industrial IoT validation.
- Certificate rotation: Test TLS cert expiry and rotation without downtime.
- Horizontal scaling: Test multiple gateway instances consuming the same HiveMQ Cloud topic space.