You can drop the index using this:
ALTER TABLE pepperstone_live ALTER COLUMN symbol DROP INDEX;
If you feel the need to add it back later, you can run a similar query:
ALTER TABLE pepperstone_live ALTER COLUMN symbol ADD INDEX;
How many rows are present with that filter? i.e. run a SELECT count() FROM pepperstone_live and add in your WHERE filter.
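As a minimal sketch, assuming your filter is on the symbol column (the value here is hypothetical; substitute your real predicate):

SELECT count()
FROM pepperstone_live
WHERE symbol = 'EURUSD';  -- hypothetical filter, use your own WHERE clause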
If it's more than a handful of rows, then removing the index should help.
Once removed, you can check the query plan again using EXPLAIN. You should see some nodes with Async appear, i.e. Async Group By or Async Filter. These nodes will run in parallel. As your table grows in size and queries slow a little, you can keep speeding them up with more cores/RAM.
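To check, just prefix the slow query with EXPLAIN. For example (the aggregation and filter here are placeholders, not your actual query):

EXPLAIN
SELECT symbol, count()
FROM pepperstone_live
WHERE symbol = 'EURUSD';  -- hypothetical filter

With the index removed, the plan should now contain those parallel Async nodes.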
If you want to run some test benchmarks on similar data, you can use the trades table on demo. This has over 1.2 billion rows of data spanning roughly 1,000 days. It consists of crypto trading data, which isn’t quite the same as your order book data, but not a million miles away:
CREATE TABLE 'trades' (
  symbol SYMBOL,
  side SYMBOL,
  price DOUBLE,
  amount DOUBLE,
  timestamp TIMESTAMP
) timestamp (timestamp) PARTITION BY DAY WAL;
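As a starting point for a benchmark, something along these lines exercises the parallel filter and aggregation paths (the symbol value is an assumption; pick any symbol that actually exists in the table):

SELECT timestamp, avg(price), sum(amount)
FROM trades
WHERE symbol = 'BTC-USD'  -- hypothetical symbol
SAMPLE BY 1d;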
I don’t recall the exact setup, but demo runs somewhere in the range of 24-48 cores.
FWIW, we have many users with order book data, often in the billions of rows per table. But whether that makes sense for you depends on several factors. Sometimes smaller tables, each holding a subset of symbols you often query together, work well, especially if you only query a few tables at a time (more RAM to go around!)