InfluxDB v2 vs v3 Performance Report

MCP IoT Gateway — Persistence Adapter Benchmark

Generated: February 8, 2026 • InfluxDB v2.8.0 (TSM) vs v3.8.0 Core (Apache Arrow)
1,123x
Max Write Speedup (v3 buffered)
112k/s
v3 Buffered Write Calls
37k/s
v2 Sustained Writes
4.4x
v3 Query Speed Advantage
6,667
Records Per Engine
10ms
Tuned WAL Interval

Architecture

  MCP Client (Claude Desktop / API)
       |
       | tools/call: persistence_write, persistence_query
       v
  MCP-IoT Gateway Server
       |
       |  PersistenceManager.resolve(store)
       v
  +-----------------------+     +--------------------------+
  | InfluxDBAdapter (v2)   |     | InfluxDB3Adapter (v3)     |
  | WriteApi + Point obj  |     | Raw HTTP POST + Buffer   |
  | Flux queries (CSV)    |     | Arrow Flight SQL (gRPC)  |
  +-----------+-----------+     +------------+-------------+
              |                              |
              v                              v
  InfluxDB v2 :8086             InfluxDB v3 Core :8181
  TSM storage engine             Apache Arrow columnar
  WAL: immediate                 WAL: --wal-flush-interval 10ms

The v3 Write Latency Problem

Root cause: InfluxDB 3 Core's --wal-flush-interval defaults to 1 second. Every write call blocks until the Write-Ahead Log flushes to disk, creating a hard ~1,000ms floor regardless of payload size or client library used.

❌ Before (default 1s WAL)

Single write1,000 ms
100-record batch1,000 ms
1000-record batch1,000 ms
Sustained throughput100 rec/s
5000 rec (50 batches)50.0 s

✅ After (10ms WAL)

Single write10 ms
100-record batch10 ms
1000-record batch10 ms
Sustained throughput50,000 rec/s
5000 rec (50 batches)0.5 s
# Server-side fix: set WAL flush interval to 10ms docker run -d -p 8181:8181 --name influxdb3 \ -u root --volume ~/.influxdb3_data:/data \ influxdb:3-core influxdb3 serve \ --node-id node1 \ --object-store file \ --data-dir /data \ --wal-flush-interval 10ms

Client-Side Optimizations

Two optimizations applied: (1) Raw HTTP POST to /api/v3/write_lp bypassing the @influxdata/influxdb3-client for writes, and (2) optional write buffering with configurable flush interval for maximum throughput.

Write Mode Comparison

v3 only
Write Mode Calls/sec Latency per Call vs Original
JS Client + 1s WAL (original) 1/s 1,000 ms baseline
Raw HTTP + 10ms WAL (sync) 100/s 10 ms 100x
Raw HTTP + 10ms WAL + Buffer (async, 100ms flush) 112,255/s 0.009 ms 112,000x
// Synchronous mode (default) — each write() blocks until server ACK const adapter = new InfluxDB3Adapter(key, name, config); // Buffered mode — writes accumulate, flush every 100ms or at 5000 lines const adapter = new InfluxDB3Adapter(key, name, config, 100, 5000); // ^ms ^max buffer

Write Throughput (v2 vs v3 Tuned)

InfluxDB v2
InfluxDB v3 (tuned)

Single Batch Writes

v3 competitive after tuning
1 record
v2
1.8ms
v3
9.0ms
100 records
v2
3.6ms
v3
5.6ms
500 records
v2
10.0ms
v3
11.4ms
1000 records
v2
4.4ms
v3
14.8ms

Sustained Write — 5,000 records in 50 batches of 100

EngineTotal TimeThroughputWinner
InfluxDB v2 134 ms 37,200 rec/s v2 3.7x
InfluxDB v3 (sync) 502 ms 10,000 rec/s
Takeaway: v2's HTTP write API remains faster for sustained synchronous writes. v3's per-request overhead (~10ms WAL fence) adds up over many sequential batches. For maximum v3 throughput, use buffered mode to amortize the flush cost across many writes.

🔍 Query Latency

Structured Queries (via PersistenceAdapter.query)

v3 wins all
Queryv2 (Flux)v3 (SQL)Winner
SELECT * LIMIT 100 18.6 ms 4.3 ms v3 4.4x
Filtered (2 tag conditions) 2.7 ms 2.6 ms tie
Select 2 specific fields 6.9 ms 2.3 ms v3 3.1x
SELECT * LIMIT 1000 18.1 ms 9.8 ms v3 1.8x

Raw Queries & Aggregations

v3 wins all
Queryv2 (Flux)v3 (SQL)Winner
AVG(cpu) GROUP BY host (10 groups) 3.3 ms 2.4 ms v3 1.4x
AVG(cpu, mem) GROUP BY region 3.6 ms 2.5 ms v3 1.4x
COUNT(*) 2.9 ms 2.3 ms v3 1.2x
Takeaway: v3's Apache Arrow columnar format and SQL query engine consistently outperform v2's Flux + annotated CSV pipeline. The advantage grows with result set size (4.4x for 100-row scans) due to Arrow's efficient columnar serialization over gRPC Flight.

📊 Schema / Metadata Operations

Schema Discovery

v3 ~2x faster
Operationv2 (Flux)v3 (SQL)Winner
List measurements / tables 2.6 ms 1.5 ms v3 1.7x
List columns / fields 2.8 ms 1.3 ms v3 2.1x
Full schema discover 24.3 ms 11.9 ms v3 2.0x

🚀 v3 Write Throughput Progression

JS Client + 1s WAL (original)
Raw HTTP + 10ms WAL
Raw HTTP + Buffer (100ms flush)

5,000 Individual write() Calls (1 record each)

Original
50,000 ms
1 call/s
Sync (tuned)
100 call/s
Buffered
112k call/s

Time to Complete 5,000 Writes

ModeTimeThroughputvs Original
JS Client + 1s WAL ~50 s 1 call/s baseline
Raw HTTP + 10ms WAL ~5 s 100 call/s 100x
Raw HTTP + Buffer 44.5 ms 112,255 call/s 112,000x

🐛 Bugs Found & Fixed

BugDescriptionFix
v3 timestamps as ns v3 JS client always interprets timestamps as nanoseconds, regardless of configured precision. Millisecond timestamps were written as nanoseconds, producing dates in 1970. client3.ts always passes "ns" to toLineProtocol()
v3 SQL time filters ISO 8601 string comparison (time >= '2026-...') returns zero rows in InfluxDB 3 Core SQL. resolveTime() generates now() - interval '5 minutes' syntax
Integer/float conflicts Same field appearing as 4520i (integer) and 4520.1 (float) in a batch causes entire write to fail. Triggered by Number.isInteger() in line protocol formatting. Force float with value + 1e-10 for sensor data fields
BigInt serialization Arrow Flight SQL returns BigInt for COUNT(*) aggregates. JSON.stringify throws "Do not know how to serialize a BigInt". client3.ts converts typeof val === "bigint" to Number(val)
1s WAL floor Default --wal-flush-interval 1s creates a hard 1,000ms floor on every write, making v3 appear 300x slower than v2 for sustained workloads. Set --wal-flush-interval 10ms + raw HTTP writes + optional buffer

💡 Recommendations

WorkloadRecommendationWhy
Write-heavy (IoT ingest) v2 or v3 buffered v2 has lowest synchronous latency; v3 with buffering achieves highest call throughput
Query-heavy (dashboards) v3 Arrow Flight SQL is 1.4–4.4x faster for all query types
Mixed read/write v3 with buffer Best of both worlds: fast async writes + fast SQL queries
Schema exploration v3 information_schema SQL queries are 2x faster than Flux schema functions
Dual-write (both v2 + v3) Both via MCP bridge Demonstrated in MQTT → InfluxDB demo: same data written to both engines in parallel

💻 Test Environment

PlatformmacOS Darwin 25.2.0 (Apple Silicon)
RuntimeNode.js v25.2.1, Docker via Colima
InfluxDB v2v2.8.0 OSS, Docker, port 8086, TSM engine
InfluxDB v3v3.8.0 Core, Docker, port 8181, Apache Arrow + IOx, WAL 10ms
Dataset8 float fields, 3 tags per record, 10 unique hosts, 4 regions
Query iterationsMedian of 20 runs per benchmark
Benchmark scripttests/integration/influxdb-perf.test.ts