Query data delay

Hi,
I recently started using QuestDB to store trades and candlestick data. The “candlesticks” table works well; however, data inserted into the “trades” table cannot be queried immediately. It seems to be delayed by a few hours…


As you can see, the current time is 2025-06-17T03:09:32 UTC and the “candlesticks” table is up to date, but the latest data time in the “trades” table is 2025-06-16T19:56:10.000000Z, a delay of more than 8 hours.

I am certain that the latest data has been inserted into the “trades” table, because when I refresh the “trades” table a few seconds later, the latest data time and numRows are updated.

Please help me figure out what the issue is.


The way I insert data into the “trades” table and the “candlesticks” table is exactly the same. The only difference is that the “trades” table has more fields and a larger volume of data.

Below is the SQL for the “trades” and “candlesticks” tables:

CREATE TABLE IF NOT EXISTS `trades`
(
    `createdAt`         TIMESTAMP             NOT NULL,
    `hash`              SYMBOL NOCACHE,
    `index`             SHORT                 NOT NULL,
    `blockId`           LONG                  NOT NULL,
    `blockTime`         TIMESTAMP             NOT NULL,
    `trader`            SYMBOL NOCACHE,
    `tradeType`         SYMBOL CAPACITY 32,
    `quote`             SYMBOL NOCACHE,
    `quoteAmount`       DOUBLE                NOT NULL,
    `quotePrice`        DOUBLE                NOT NULL,
    `base`              SYMBOL NOCACHE,
    `baseAmount`        DOUBLE                NOT NULL,
    `basePrice`         DOUBLE                NOT NULL,
    `liq`               DOUBLE                NOT NULL,
    `vol`               DOUBLE                NOT NULL,
    `fee`               VARCHAR               NOT NULL
) TIMESTAMP(blockTime)
PARTITION BY DAY
TTL 30D
DEDUP UPSERT KEYS(blockTime,hash,index);

CREATE TABLE IF NOT EXISTS `candlesticks`
(
    `interval`      SYMBOL CAPACITY 32,
    `coinId`        SYMBOL NOCACHE,
    `createdAt`     TIMESTAMP               NOT NULL,
    `updatedAt`     TIMESTAMP               NOT NULL,
    `time`          TIMESTAMP               NOT NULL,
    `open`          DOUBLE                  NOT NULL,
    `high`          DOUBLE                  NOT NULL,
    `low`           DOUBLE                  NOT NULL,
    `close`         DOUBLE                  NOT NULL,
    `vol`           DOUBLE                  NOT NULL,
    `liq`           DOUBLE                  NOT NULL
) TIMESTAMP(time)
PARTITION BY HOUR
TTL 1D
DEDUP UPSERT KEYS(time,coinId,interval);

I then use the Go client with the ILP protocol to insert data.

Insert into the “trades” table:

func (s *TradeTable) BatchInsert(ctx context.Context, trades []*Trade) error {
	if len(trades) == 0 {
		return nil
	}
	tableName := s.getTableName()
	sender, err := s.ilp.Sender(ctx)
	if err != nil {
		return err
	}
	for _, trade := range trades {
		err := sender.Table(tableName).
			Symbol("hash", trade.Hash).
			Symbol("tradeType", trade.TradeType).
			Symbol("trader", trade.Signer).
			Symbol("quote", trade.Quote).
			Symbol("base", trade.Base).
			TimestampColumn("createdAt", trade.CreatedAt).
			Int64Column("index", int64(trade.Index)).
			Int64Column("blockId", int64(trade.BlockId)).
			TimestampColumn("blockTime", trade.BlockTime).
			Float64Column("nativePrice", trade.NativePrice.InexactFloat64()).
			Float64Column("quoteAmount", trade.QuoteAmount.InexactFloat64()).
			Float64Column("quotePrice", trade.QuotePrice.InexactFloat64()).
			Float64Column("baseAmount", trade.BaseAmount.InexactFloat64()).
			Float64Column("basePrice", trade.BasePrice.InexactFloat64()).
			Float64Column("liq", trade.Liq.InexactFloat64()).
			Float64Column("vol", trade.Vol.InexactFloat64()).
			StringColumn("fee", feeJson).
			At(ctx, trade.BlockTime)
		if err != nil {
			logger.Warnf("insert trade to db failed: %v", err)
		}
	}
	if err := sender.Flush(ctx); err != nil {
		logger.Warnf("flush trades to db failed: %v", err)
	}
	return sender.Close(ctx)
}

Insert into the “candlesticks” table:

func (s *CandleTable) BatchInsert(ctx context.Context, candlesticks []*Candlestick) error {
	sender, err := s.ilp.Sender(ctx)
	if err != nil {
		return err
	}

	tableName := s.getTableName(string(interval))
	for _, candle := range candlesticks {
		err := sender.Table(tableName).
			Symbol("interval", candle.Interval).
			Symbol("coinId", candle.CoinId).
			TimestampColumn("createdAt", candle.CreatedAt).
			TimestampColumn("updatedAt", candle.UpdatedAt).
			TimestampColumn("time", candle.Time).
			Float64Column("open", candle.Open.InexactFloat64()).
			Float64Column("high", candle.High.InexactFloat64()).
			Float64Column("low", candle.Low.InexactFloat64()).
			Float64Column("close", candle.Close.InexactFloat64()).
			Float64Column("vol", candle.Vol.InexactFloat64()).
			Float64Column("liq", candle.Liq.InexactFloat64()).
			At(ctx, candle.Time)
		if err != nil {
			logger.Warnf("insert candle to questdb failed: %v", err)
		}
	}

	if err := sender.Flush(ctx); err != nil {
		logger.Warnf("flush candles to questdb failed: %v", err)
	}
	return sender.Close(ctx)
}

Hi @Oldman,

Please can you check wal_tables()? My guess is that you have a lot of transactions backed up.
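For example, a quick check like this (column names may vary slightly between versions) compares the sequencer and writer transaction numbers; a large gap means transactions are queued up waiting to be applied:

SELECT name, suspended, writerTxn, sequencerTxn
FROM wal_tables()
WHERE name = 'trades';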

Please can you also share the metrics charts for those tables?

You can quickly visualise key information such as the number of transactions, transaction sizes, write amplification, etc.

Slow ingestion is usually caused by:

  • Sending data out of order
  • Small transaction sizes

If the delay is as large as a few hours, the issue is most likely both of the above. The ideal ingestion path uses larger batch sizes, with each send chronologically ordered relative to the last (always going forward!)
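As a minimal sketch of that idea, reusing your Trade struct and sender (BatchInsertOrdered and the trimmed column list are just illustrative, and it needs the standard sort package):

func (s *TradeTable) BatchInsertOrdered(ctx context.Context, trades []*Trade) error {
	if len(trades) == 0 {
		return nil
	}
	// Order rows within this send by the designated timestamp so each
	// transaction moves forward in time relative to the previous one.
	sort.Slice(trades, func(i, j int) bool {
		return trades[i].BlockTime.Before(trades[j].BlockTime)
	})
	sender, err := s.ilp.Sender(ctx)
	if err != nil {
		return err
	}
	defer sender.Close(ctx)
	for _, trade := range trades {
		// ...same Symbol/Column calls as in your BatchInsert above...
		if err := sender.Table(s.getTableName()).
			Symbol("hash", trade.Hash).
			TimestampColumn("blockTime", trade.BlockTime).
			At(ctx, trade.BlockTime); err != nil {
			return err
		}
	}
	// One explicit flush for the whole accumulated batch.
	return sender.Flush(ctx)
}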

I appreciate that you use a loop over a batch of trades. But sometimes only a handful of trades are sent in each triggering event, so it looks batched while in practice each transaction is smaller than expected.

One more thing - have you sized your symbols appropriately? I see this:

trader SYMBOL NOCACHE,

NOCACHE would indicate you expect very high cardinality. In that case, you should set the symbol capacity too. The symbol capacity should be set close to the number of distinct values you expect in the column.

If that is 10,000 traders, then you should try something like 16384, for example.
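For example, in the DDL (or, on recent QuestDB versions, in place via ALTER TABLE):

`trader` SYMBOL CAPACITY 16384 NOCACHE,

ALTER TABLE trades ALTER COLUMN trader SYMBOL CAPACITY 16384;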

Undersizing this dramatically is another source of ingestion slowdown (and high CPU usage!)

Hi @nwoolmer, thank you for your reply.

After posting this thread, I checked the documentation again and read up on the SYMBOL type. I then set capacities for the SYMBOL columns in the trades table and re-ran it.

So far it has run for several hours and everything is fine. As you said, that must have been the problem!

I will let it run for a while and see. If there are any problems, I will post the metrics and ask you again.

Thank you again!