Ingestion slowness with Tick Data

I have 1 day's worth of tick/NBBO data for ~2,000 securities. I am using ILP over TCP to ingest the data into QuestDB.
– Using the out-of-the-box config (modified: line.tcp.writer.worker.count: 60).
– Below is my CREATE TABLE statement:
CREATE TABLE IF NOT EXISTS nbbo_tick_data (
    Symbol VARCHAR, Date VARCHAR, Time VARCHAR,
    Bid_Price DOUBLE, Bid_Exchange VARCHAR, Bid_Size DOUBLE,
    Ask_Price DOUBLE, Ask_Exchange VARCHAR, Ask_Size DOUBLE,
    timestamp TIMESTAMP
) TIMESTAMP(timestamp) PARTITION BY HOUR;
Observations:
QuestDB runs on 64 vCPU / 100 GB RAM; tried both 500 GB gp3 and NVMe disks.
Data: ~68MM rows, i.e. ~5 GB.
The Python client takes ~70 s to send the data over the wire to the QuestDB server. The client distributes ingestion (ILP over TCP) across 64 cores, for a total of 64 TCP socket connections to the server.
(TCP send/receive buffers seem reasonable.)
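For context, a minimal sketch of the fan-out pattern described above: the rows are split across worker processes, each holding its own ILP/TCP connection. The host/port, worker count, and column subset are placeholders, not the poster's actual setup, and the `questdb` package must be installed for the send itself.

```python
# Hypothetical sketch: N worker processes, one TCP socket each (as in the post).
from multiprocessing import Pool

N_WORKERS = 64  # matches "64 TCP socket connections" above; an assumption otherwise

def send_chunk(rows):
    """Send one chunk of rows over a dedicated ILP/TCP connection."""
    from questdb.ingress import Sender, TimestampNanos  # requires the `questdb` package
    with Sender.from_conf('tcp::addr=localhost:9009;') as sender:  # placeholder address
        for r in rows:
            sender.row(
                'nbbo_tick_data',
                columns={'Symbol': r['Symbol'],
                         'Bid_Price': r['Bid_Price'],
                         'Ask_Price': r['Ask_Price']},  # remaining columns elided
                at=TimestampNanos(r['ts_nanos']))
    return len(rows)

def fan_out(all_rows):
    """Round-robin the rows into N_WORKERS chunks and send them in parallel."""
    chunks = [all_rows[i::N_WORKERS] for i in range(N_WORKERS)]
    with Pool(N_WORKERS) as pool:
        return sum(pool.map(send_chunk, chunks))
```

Note that row-by-row `sender.row()` calls spend most of their time in Python, which matters for the timings below.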

Issues:
QuestDB server ingestion takes ~10 minutes, which is way too slow.

Are we missing something in terms of server config or our host/OS configuration?

Hi,

Can you provide an example of how you’re ingesting?
Is this row-by-row or are you using a dataframe?

The latter is significantly faster.
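For reference, a minimal sketch of dataframe ingestion with the QuestDB Python client (`questdb` package): one `dataframe()` call serializes a whole batch to ILP in native code instead of looping in Python. The host/port, column values, and batch contents here are placeholders.

```python
import pandas as pd

def make_batch(rows):
    """Build a DataFrame matching the nbbo_tick_data schema from the question."""
    df = pd.DataFrame(rows, columns=[
        'Symbol', 'Date', 'Time', 'Bid_Price', 'Bid_Exchange',
        'Bid_Size', 'Ask_Price', 'Ask_Exchange', 'Ask_Size', 'timestamp'])
    df['timestamp'] = pd.to_datetime(df['timestamp'])  # designated timestamp column
    return df

def send_batch(df, conf='tcp::addr=localhost:9009;'):  # placeholder address
    """Send the whole DataFrame in one call instead of row-by-row."""
    from questdb.ingress import Sender  # requires the `questdb` package
    with Sender.from_conf(conf) as sender:
        sender.dataframe(df, table_name='nbbo_tick_data', at='timestamp')

# Example batch with a single made-up row:
batch = make_batch([
    ('AAPL', '2024-05-01', '09:30:00.000', 187.10, 'Q', 300.0,
     187.12, 'N', 200.0, '2024-05-01T09:30:00.000000Z'),
])
```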

What is your round-trip time latency to the box holding the database?
What is your available bandwidth (as might be reported by an iperf3 test)?
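If it helps, the two measurements above can be taken like this (hostname is a placeholder):

```shell
# Round-trip latency from the client to the database host:
ping -c 10 questdb-host.example.com

# Available bandwidth: run `iperf3 -s` on the database host, then from the client:
iperf3 -c questdb-host.example.com
```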

Unrelated to performance, but you might also want to consider moving to ILP over HTTP, as it provides better features, such as error reporting.
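With the Python client, switching transports is just a different configuration string; port 9000 is QuestDB's default HTTP port, and `localhost` here is a placeholder:

```python
# ILP over TCP vs. ILP over HTTP -- same Sender API, different conf string.
conf_tcp = 'tcp::addr=localhost:9009;'
conf_http = 'http::addr=localhost:9000;'  # HTTP adds per-batch error reporting

# Usage (requires the `questdb` package and a running server):
#   with Sender.from_conf(conf_http) as sender: ...
```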

Adam