Background
We’re enterprise users who recently started considering our options after InfluxData limited features in their V3 core offering. Our application tracks system metrics in a radar control system, writing data at a modest rate (approximately 4 flushes per second, ~612 rows in total). Each DB instance will be self-contained on the same system the controller runs on. Resource usage is extremely modest when QuestDB is not in use.
Problem
Despite this relatively low write volume, QuestDB consumes nearly all available CPU on our 12-core test VM during write operations, leaving insufficient resources for other processes. According to QuestDB's published performance benchmarks, our write volume should be well within its capabilities, and we have not changed the amount of data being ingested compared with what we sent to InfluxDB V1. With the same approach on InfluxDB V1, the database was barely visible in htop or top and the system remained very usable.
Implementation Details
Schema Approach
We’re using a table-per-metric approach (similar to InfluxDB V1), where each metric gets its own table:
CREATE TABLE IF NOT EXISTS ${metricName} (
timestamp TIMESTAMP,
value ${dataType}, -- BOOLEAN, STRING, or DOUBLE based on metric type
flags INT,
faultable BOOLEAN,
faulted BOOLEAN,
interlock STRING
) TIMESTAMP(timestamp)
PARTITION BY HOUR
TTL ${ttlValue}
WAL
DEDUP UPSERT KEYS(timestamp);
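For reference, a minimal sketch of how we fill in that template per metric; the createMetricTable helper, the direct call to QuestDB's REST /exec endpoint, and the example metric name are simplified, illustrative stand-ins for our actual table-creation code:

// Hypothetical, simplified helper: instantiates the DDL template above for one
// metric and runs it via QuestDB's REST /exec endpoint (default port 9000).
async function createMetricTable(metricName, dataType, ttlValue) {
  const ddl = `CREATE TABLE IF NOT EXISTS ${metricName} (
    timestamp TIMESTAMP,
    value ${dataType},
    flags INT,
    faultable BOOLEAN,
    faulted BOOLEAN,
    interlock STRING
  ) TIMESTAMP(timestamp) PARTITION BY HOUR TTL ${ttlValue} WAL
    DEDUP UPSERT KEYS(timestamp);`;

  const response = await fetch(
    "http://localhost:9000/exec?query=" + encodeURIComponent(ddl)
  );
  if (!response.ok) {
    throw new Error(`CREATE TABLE failed for ${metricName}: ${response.status}`);
  }
}

For a DOUBLE metric retained for three days, this would be called as createMetricTable("engine_temp", "DOUBLE", "3 DAYS").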
Data Insertion Approach
- Using the QuestDB Node.js client (@questdb/nodejs-client)
- ILP (InfluxDB Line Protocol) over HTTP for data ingestion
- We receive a full sampled report containing all system metrics roughly every 250ms
- Each metric is inserted separately (table per metric)
- We're using auto_flush=off and handling flushes manually; the sender is configured roughly as in the sketch after this list
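A minimal sketch of the sender setup, assuming a local QuestDB instance on the default HTTP port (the address is a placeholder for our actual deployment value):

const { Sender } = require("@questdb/nodejs-client");

// ILP over HTTP, with automatic flushing disabled so each 250ms report
// is flushed explicitly by our code.
const sender = Sender.fromConfig("http::addr=localhost:9000;auto_flush=off;");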
Code Sample
Our insertion logic looks like:
async insertBite(timestamp, items) {
  // Create tables if needed (basically a no-op after the first BITE report comes in)
  const tableCreateSuccess = await this.createBiteTables(items);
  if (tableCreateSuccess) {
    // Write each item to its own table
    for (const item of items) {
      this.sender.table(item.name);
      // Set the "value" column according to the metric's data type
      switch (item.data_type) {
        case 0:
          this.sender.booleanColumn("value", Boolean(item.value));
          break;
        case 3:
          this.sender.stringColumn("value", item.value);
          break;
        default:
          this.sender.floatColumn("value", item.value);
      }
      this.sender.intColumn("flags", item.flags);
      this.sender.booleanColumn("faultable", Boolean(item.faultable));
      this.sender.booleanColumn("faulted", Boolean(item.faulted));
      this.sender.stringColumn("interlock", item.interlock);
      // Designated timestamp for the row, in milliseconds
      await this.sender.at(moment(timestamp).valueOf(), "ms");
    }
    // Flush once the sender buffer holds all items from the report
    try {
      await this.sender.flush();
      this.logger.log("log", "BITE report logged to QuestDB.");
    } catch (error) {
      this.logger.log(
        "error",
        "There was an error logging a BITE report to the database:\n" + error
      );
    }
  }
}
While this is running, the server becomes essentially unusable. The inserts are constant: the data is sampled at regular intervals, so there are no periods without incoming data.
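For context, the call pattern driving insertBite looks roughly like this; the event name and the controller/metricsWriter object names are simplified placeholders for our actual interfaces:

// Simplified sketch: the controller emits a full BITE report roughly every
// 250ms, and each report is written and flushed immediately.
controller.on("biteReport", async (report) => {
  await metricsWriter.insertBite(report.timestamp, report.items);
});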
Virtual Test Environment
OS: Ubuntu Server 24.04
CPU: 12-core
Memory: 32GB
Disk: QCOW2 image on ZFS with an Optane-backed ZFS intent log (sync enabled), served over NFS on a 10Gbit/s link.
Questions
- Is our table-per-metric approach causing the excessive CPU usage? Is a single-table approach the expected pattern with QuestDB?
- Are the WAL and DEDUP UPSERT KEYS options particularly CPU-intensive?
- Are there server configuration parameters we should adjust to reduce CPU usage?
- Is there a more efficient way to structure our insertion code for a table-per-metric approach?
- Are there any known issues or best practices when migrating from InfluxDB to QuestDB that we should be aware of?
Any guidance would be tremendously appreciated. We've invested significant development effort in building our software around QuestDB as our time-series database, and we are hoping to commit to making it work efficiently in our environment, as other offerings don't align with our use case as well.