Enabling JIT on SQL queries

Hi,

How can I ensure JIT is enabled for my SQL queries? I have set the server.conf property, but it still does not seem to be working as intended. I also see that the jit flag is set to false for all queries that I send, both from my Python code and from the QuestDB web console.

i.q.g.e.QueryProgress exe [id=112, sql=select version(), principal=admin, cache=false, jit=false]

Is there any additional parameter that I need to set?

Thanks

Not all queries will use JIT; this is an optimisation that applies only to certain queries!

Also, we only JIT compile for x86 architectures, not for ARM, so you won’t see this on Apple Silicon.

We are running the DB on an x86 machine. Also, I do not see this being set for any of the thousands of queries that we run which have WHERE clauses. I have even enabled the debug option.
Sample output from logs:

2025-01-28T14:43:15.651167Z I i.q.g.e.QueryProgress exe [id=1991, sql=`select count() from quote where trade_date='2025-01-27'`, principal=admin, cache=false, jit=false]
2025-01-28T14:43:15.651203Z I i.q.c.TableReader open partition [path=/quote~57/2025-01-27.3944, rowCount=1162663, partitionIndex=83, partitionCount=86]
2025-01-28T14:43:15.652176Z I i.q.g.e.QueryProgress fin [id=1991, sql=`select count() from quote where trade_date='2025-01-27'`, principal=admin, cache=false, jit=false, time=1014764]
2025-01-28T14:43:15.652181Z I i.q.c.h.p.JsonQueryProcessorState [7082401209047] timings [compiler: 5790695, count: 0, execute: 6832354, q=`select count() from quote where trade_date='2025-01-27'`]

BTW, is there any documentation on how I can read the logs?

thanks.

Try this on demo:

explain
select timestamp, price from trades
where price > 5

Async JIT Filter workers: 46
  filter: 5<price
    PageFrame
        Row forward scan
        Frame forward scan on: trades

This is (for example) a simple filter over a timestamp and double column.

BTW, is there any documentation on how I can read the logs?

Please could you expand on this a bit?

I just tried this and get the following:

explain select trade_date, ticker, close from daily
          where 
            currency = 'USD' and close is not null
            and ticker = 'AAPL'
            and close > 100

SelectedRecord
    Filter filter: (currency='USD' and close is not null and ticker='AAPL' and 100<close)
        PageFrame
            Row forward scan
            Frame forward scan on: daily

I ran this from the QuestDB web UI and see the following in the logs:

2025-01-29T02:39:17.350349Z I i.q.g.e.QueryProgress fin [id=3278, sql=`explain select trade_date, ticker, close from daily
          where
            currency = 'USD' and close is not null
            and ticker = 'AAPL'
            and close > 100`, principal=admin, cache=false, jit=false, time=17385286]

As you can see, the HTTP message is setting jit to false. Does this override the config setting? How do we get the web UI to pass jit as true?

Regarding logs: I meant how do we interpret the log output shown above. What do "fin" and "exe" mean? What does the time value after the jit flag indicate, etc.?

Thanks.

It is false because the query does not compile to a JIT filter; the plan should have a node with JIT in it. How many cores is this running on? It doesn't compile to an async (parallel) filter either.

Also, what are the types involved? Most likely, a simple number-based filter will be JIT'ed, but not necessarily more complex types that rely on Java code to execute. As a rough rule, if you could pass the memory to C++ and fully realise the filter, then there is a good chance of compiling to a JIT filter.
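For instance, here is a rough contrast you can try on demo (the expected plans are my guess, not something from the docs): the numeric filter above compiles to an Async JIT Filter, whereas a regex filter on the symbol column has to run through Java code, so I would expect a plain Async Filter node with no JIT:

explain
select timestamp, price from trades
where symbol ~ '^BTC'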

Re: logs, roughly exe is the start of query execution and fin is the end. The time parameter is the execution time.
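For example, pairing up the two QueryProgress messages from your earlier log (this is my reading of the fields, not official documentation):

2025-01-28T14:43:15.651167Z ... exe [id=1991, ...]                <- query id 1991 starts executing
2025-01-28T14:43:15.652176Z ... fin [id=1991, ..., time=1014764]  <- the same id finishes; time looks like nanoseconds, since ~1.01 ms matches the gap between the two timestamps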

You can also search the GitHub repo at questdb/questdb for log messages to see lower-level context and where they originate.

We don't have detailed docs about interpreting the logs, and many users prefer to ask us (or our Slack chatbot) their questions anyway!

It's running on an 8-core machine. The only types in the table are symbol, timestamp, long and boolean.

BTW, I just updated to 8.2.2 and I see that the jit parameter is getting set to true on the exe log message. But I still do not see JIT mentioned in the EXPLAIN plan. What is the best way to debug this? I think getting JIT working on top of the 8.2.2 performance improvements will make things even faster.

BTW, I also notice that after upgrading to 8.2.2 the DB freezes for random amounts of time, between 25 and 70 seconds, when executing the same queries that I used to run with 8.2.1.

… see that the jit parameter is getting set to true on the exe log message. But I still do not see JIT mentioned in the EXPLAIN plan.

Please could you share the log message and EXPLAIN plan again?

BTW, I also notice that after upgrading to 8.2.2 the DB freezes for random amounts of time, between 25 and 70 seconds, when executing the same queries that I used to run with 8.2.1.

Try enabling query tracing: Query Tracing | QuestDB

It may be that you have a very heavy query, or something that is holding lots of resources.

We made a lot of changes in the last release, including quite a few performance-related ones. It may be that your queries are running faster and using more resources than before, and that this is having a knock-on effect.

Hi,

I have experimented with a few variations and figured out that JIT seems to get enabled when you increase shared.worker.count above 2. I have currently set shared.worker.count=5 and can see that JIT is showing its magic. The following is the EXPLAIN for the same query as above:

explain select trade_date, ticker, close from daily
          where 
            currency = 'USD' and close is not null
            and ticker = 'AAPL'
            and close > 200

SelectedRecord
    Async JIT Filter workers: 5
      filter: (currency='USD' and close is not null and ticker='AAPL' and 200<close)
        PageFrame
            Row forward scan
            Frame forward scan on: daily

This resolves the JIT suspense.

My biggest challenge now is the random and LONG execution times for the same query (but with different date ranges). I have enabled Query Tracing as suggested, and the following is a snapshot. As you can see, execution times range from 150 ms to 23 seconds.

I have enabled DEBUG logging to see what might be happening, and found the following, which is where the time is going. Please notice the lines marked with **. What could be causing this? Is GC blocking them? I have 64 GB on the box and the process is only consuming 2.4 GB. I am using the default settings as indicated in the config.

2025-02-12T21:52:30.421710Z D i.q.s.DirectIntList resized [old=0, new=128]
2025-02-12T21:52:30.421712Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=0, id=59823, taskType=1, frameIndex=23, frameCount=58, active=true]
2025-02-12T21:52:30.423908Z D i.q.g.e.t.AsyncGroupByRecordCursor collected [shard=3, frameIndex=21, frameCount=58, active=true, cursor=9223372036854775807]
2025-02-12T21:52:30.423919Z D i.q.s.DirectIntList resized [old=0, new=128]
2025-02-12T21:52:30.423921Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=3, id=59811, taskType=1, frameIndex=22, frameCount=58, active=true]
2025-02-12T21:52:30.423930Z D i.q.g.e.t.AsyncGroupByRecordCursor collected [shard=3, frameIndex=22, frameCount=58, active=true, cursor=9223372036854775807]
2025-02-12T21:52:30.423932Z D i.q.s.DirectIntList resized [old=0, new=128]
2025-02-12T21:52:30.423933Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=3, id=59811, taskType=1, frameIndex=23, frameCount=58, active=true]
2025-02-12T21:52:30.856585Z D i.q.g.e.t.AsyncGroupByRecordCursor collected [shard=3, frameIndex=23, frameCount=58, active=true, cursor=9223372036854775807]
2025-02-12T21:52:30.856592Z D i.q.s.DirectIntList resized [old=0, new=128]
2025-02-12T21:52:30.856593Z D i.q.s.DirectIntList resized [old=0, new=128]
2025-02-12T21:52:30.856595Z D i.q.s.DirectIntList resized [old=0, new=128]
2025-02-12T21:52:30.856599Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=0, id=59817, taskType=1, frameIndex=24, frameCount=58, active=true]
2025-02-12T21:52:30.856601Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=1, id=59814, taskType=1, frameIndex=24, frameCount=58, active=true]
2025-02-12T21:52:30.856602Z D i.q.s.DirectIntList resized [old=0, new=128]
2025-02-12T21:52:30.856604Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=3, id=59811, taskType=1, frameIndex=24, frameCount=58, active=true]
2025-02-12T21:52:30.858121Z D i.q.g.e.t.AsyncGroupByRecordCursor collected [shard=1, frameIndex=23, frameCount=58, active=true, cursor=9223372036854775807]
2025-02-12T21:52:30.858134Z D i.q.s.DirectIntList resized [old=0, new=128]


**2025-02-12T21:52:30.858136Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=1, id=59822, taskType=1, frameIndex=24, frameCount=58, active=true]**
**2025-02-12T21:52:31.432072Z D i.q.g.e.t.AsyncGroupByRecordCursor collected [shard=0, frameIndex=24, frameCount=58, active=true, cursor=9223372036854775807]**


2025-02-12T21:52:31.432075Z D i.q.g.e.t.AsyncGroupByRecordCursor collected [shard=1, frameIndex=24, frameCount=58, active=true, cursor=9223372036854775807]
2025-02-12T21:52:31.432076Z D i.q.g.e.t.AsyncGroupByRecordCursor collected [shard=0, frameIndex=24, frameCount=58, active=true, cursor=9223372036854775807]
2025-02-12T21:52:31.432091Z D i.q.s.DirectIntList resized [old=0, new=128]
2025-02-12T21:52:31.432092Z D i.q.s.DirectIntList resized [old=0, new=128]
2025-02-12T21:52:31.432093Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=0, id=59823, taskType=1, frameIndex=25, frameCount=58, active=true]
2025-02-12T21:52:31.432094Z D i.q.s.DirectIntList resized [old=0, new=128]
2025-02-12T21:52:31.432094Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=1, id=59814, taskType=1, frameIndex=25, frameCount=58, active=true]
2025-02-12T21:52:31.432096Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=0, id=59817, taskType=1, frameIndex=25, frameCount=58, active=true]
2025-02-12T21:52:31.432120Z D i.q.g.e.t.AsyncGroupByRecordCursor collected [shard=3, frameIndex=24, frameCount=58, active=true, cursor=9223372036854775807]
2025-02-12T21:52:31.432128Z D i.q.s.DirectIntList resized [old=0, new=128]
2025-02-12T21:52:31.432130Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=3, id=59811, taskType=1, frameIndex=25, frameCount=58, active=true]
2025-02-12T21:52:31.446750Z D i.q.g.e.t.AsyncGroupByRecordCursor collected [shard=1, frameIndex=24, frameCount=58, active=true, cursor=9223372036854775807]
2025-02-12T21:52:31.446760Z D i.q.s.DirectIntList resized [old=0, new=128]

**2025-02-12T21:52:31.446768Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=1, id=59822, taskType=1, frameIndex=25, frameCount=58, active=true]**
**2025-02-12T21:52:32.074379Z D i.q.g.e.t.AsyncGroupByRecordCursor collected [shard=1, frameIndex=25, frameCount=58, active=true, cursor=9223372036854775807]**


2025-02-12T21:52:32.074386Z D i.q.s.DirectIntList resized [old=0, new=128]
2025-02-12T21:52:32.074386Z D i.q.s.DirectIntList resized [old=0, new=128]
2025-02-12T21:52:32.074391Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=0, id=59817, taskType=1, frameIndex=26, frameCount=58, active=true]
2025-02-12T21:52:32.074393Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=3, id=59811, taskType=1, frameIndex=26, frameCount=58, active=true]
2025-02-12T21:52:32.074394Z D i.q.s.DirectIntList resized [old=0, new=128]
2025-02-12T21:52:32.074396Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=1, id=59814, taskType=1, frameIndex=26, frameCount=58, active=true]
2025-02-12T21:52:32.074508Z D i.q.g.e.t.AsyncGroupByRecordCursor collected [shard=0, frameIndex=25, frameCount=58, active=true, cursor=9223372036854775807]
2025-02-12T21:52:32.074520Z D i.q.s.DirectIntList resized [old=0, new=128]
2025-02-12T21:52:32.074522Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=0, id=59823, taskType=1, frameIndex=26, frameCount=58, active=true]
2025-02-12T21:52:32.084220Z D i.q.g.e.t.AsyncGroupByRecordCursor collected [shard=1, frameIndex=25, frameCount=58, active=true, cursor=9223372036854775807]
2025-02-12T21:52:32.084231Z D i.q.s.DirectIntList resized [old=0, new=128]


**2025-02-12T21:52:32.084232Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=1, id=59822, taskType=1, frameIndex=26, frameCount=58, active=true]**
**2025-02-12T21:52:32.648405Z D i.q.g.e.t.AsyncGroupByRecordCursor collected [shard=3, frameIndex=26, frameCount=58, active=true, cursor=9223372036854775807]**


2025-02-12T21:52:32.648406Z D i.q.g.e.t.AsyncGroupByRecordCursor collected [shard=0, frameIndex=26, frameCount=58, active=true, cursor=9223372036854775807]
2025-02-12T21:52:32.648411Z D i.q.g.e.t.AsyncGroupByRecordCursor collected [shard=1, frameIndex=26, frameCount=58, active=true, cursor=9223372036854775807]
2025-02-12T21:52:32.648411Z D i.q.g.e.t.AsyncGroupByRecordCursor collected [shard=0, frameIndex=26, frameCount=58, active=true, cursor=9223372036854775807]
2025-02-12T21:52:32.648422Z D i.q.s.DirectIntList resized [old=0, new=128]
2025-02-12T21:52:32.648423Z D i.q.s.DirectIntList resized [old=0, new=128]
2025-02-12T21:52:32.648424Z D i.q.s.DirectIntList resized [old=0, new=128]
2025-02-12T21:52:32.648424Z D i.q.s.DirectIntList resized [old=0, new=128]
2025-02-12T21:52:32.648425Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=3, id=59811, taskType=1, frameIndex=27, frameCount=58, active=true]

In addition, I also see many log statements like the following before this:

2025-02-12T21:52:21.464578Z D i.q.c.v.MemoryCMRImpl open [file=/var/lib/questdb/db/daily~36/2025-01-16.4854/div.d, fd=6167579555785]
2025-02-12T21:52:21.464580Z D i.q.c.v.MemoryCMRImpl map  [file=/var/lib/questdb/db/daily~36/2025-01-16.4854/div.d, fd=6167579555785, pageSize=42908, size=42908]
2025-02-12T21:52:21.464582Z D i.q.c.v.MemoryCMRDetachedImpl closing [fd=6167579555785]
2025-02-12T21:52:21.464586Z D i.q.c.v.MemoryCMRImpl open [file=/var/lib/questdb/db/daily~36/2025-01-16.4854/split.d, fd=6167579555786]
2025-02-12T21:52:21.464589Z D i.q.c.v.MemoryCMRImpl map  [file=/var/lib/questdb/db/daily~36/2025-01-16.4854/split.d, fd=6167579555786, pageSize=42908, size=42908]
2025-02-12T21:52:21.464591Z D i.q.c.v.MemoryCMRDetachedImpl closing [fd=6167579555786]
2025-02-12T21:52:21.464594Z D i.q.c.v.MemoryCMRImpl open [file=/var/lib/questdb/db/daily~36/2025-01-16.4854/prev_date.d, fd=6167579555787]
2025-02-12T21:52:21.464597Z D i.q.c.v.MemoryCMRImpl map  [file=/var/lib/questdb/db/daily~36/2025-01-16.4854/prev_date.d, fd=6167579555787, pageSize=85816, size=85816]
2025-02-12T21:52:21.464599Z D i.q.c.v.MemoryCMRDetachedImpl closing [fd=6167579555787]
2025-02-12T21:52:21.464602Z D i.q.c.v.MemoryCMRImpl open [file=/var/lib/questdb/db/daily~36/2025-01-16.4854/prev_close.d, fd=6167579555788]
2025-02-12T21:52:21.464604Z D i.q.c.v.MemoryCMRImpl map  [file=/var/lib/questdb/db/daily~36/2025-01-16.4854/prev_close.d, fd=6167579555788, pageSize=42908, size=42908]
2025-02-12T21:52:21.464606Z D i.q.c.v.MemoryCMRDetachedImpl closing [fd=6167579555788]
2025-02-12T21:52:21.464609Z D i.q.c.v.MemoryCMRImpl open [file=/var/lib/questdb/db/daily~36/2025-01-16.4854/open.d, fd=6167579555789]
2025-02-12T21:52:21.464612Z D i.q.c.v.MemoryCMRImpl map  [file=/var/lib/questdb/db/daily~36/2025-01-16.4854/open.d, fd=6167579555789, pageSize=42908, size=42908]
2025-02-12T21:52:21.464614Z D i.q.c.v.MemoryCMRDetachedImpl closing [fd=6167579555789]

Thanks.

Re: worker count, we don't activate some optimisations for <4 cores. This is because running queries in parallel with only 2 cores takes too much away from other processes, like ingestion. As far as I recall, to get a single-threaded JIT filter, you have to override a config setting to disable async filters.

Better to just run on more cores!
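For reference, a minimal server.conf sketch; the property names are from memory of the config reference, so please double-check them against the docs:

# enough shared workers to enable parallel (async) JIT filters
shared.worker.count=5
# JIT compilation itself is on by default
cairo.sql.jit.mode=on
# parallel filters are also on by default; setting this to false is the
# override mentioned above, if you really want a single-threaded JIT filter
cairo.sql.parallel.filter.enabled=true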

Re: time taken, you can estimate the scan length by running a simple select with count(), no other group by, using the same filter. Is there any correlation between the amount of data scanned and the runtime?
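For example, something along these lines, reusing the filter from your earlier EXPLAIN:

select count() from daily
where currency = 'USD' and close is not null
  and ticker = 'AAPL'
  and close > 200

If the slow runs also return much larger counts, the runtime is probably just tracking the amount of data scanned; if not, something else is going on.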

These queries are not time bounded or limited, so are full table scans. How much data is present in the table? Are the partitions very small, and numerous?

Thanks for the insight on JIT and cores dependency.

Regarding time, the date range (number of days) used for the BETWEEN clause is the same; only the start and end dates differ. Also, I cannot reproduce these long execution times consistently. If I run the same query again, it runs faster.

To your question of whether there is any correlation between the amount of data scanned and the runtime: no, I do not find a correlation.

The queries are time-bounded, i.e. I am filtering on the partition column (e.g. where date between 'd1' and 'd2'), so I do not think I am doing full table scans.

The table has about 2.5B rows, with each date/partition holding about 5 million rows. The query looks at a 15-day date range.
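Rough arithmetic on those numbers (assuming the filter prunes purely by partition): 15 partitions × ~5 million rows ≈ 75 million rows touched per query.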

BTW, what are the highlighted log statements above indicating? How do I read this log?

thanks.

This might just be a cold-start issue. The first time you run a query, it will load pages from disk into memory (via mmap), which can take a while. For subsequent queries, no disk operations will be required, because the data is already in the page cache.

To test this theory, use the touch function. This takes a SQL query as a parameter.

Simply restart the database, run touch with your slow query, and then your slow query on its own. If it is fast, then most likely this is what you are seeing.
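Something along these lines; please check the touch() docs for the exact syntax, and note the filter below is just borrowed from your earlier query rather than your real slow one:

select touch(
  select trade_date, ticker, close from daily
  where currency = 'USD' and ticker = 'AAPL' and close > 200
)

After a restart, the touch call pre-loads the relevant pages, so if the real query is then fast, the slowness was the page cache being cold.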

Re: log messages, the highlighted logs are just explaining that files are being opened, and that group by code is running.

Sure, I will try touch. But as you can see in the Query Tracing screenshot, it does not seem like a cold-start issue. The DB and the tables are being actively used (and respond quickly) before the specific run that takes longer.

Also, if you look at the highlighted (**) logs above, notice a freeze of ~600 ms at PageFrameSequence:

2025-02-12T21:52:30.858136Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=1, id=59822, taskType=1, frameIndex=24, frameCount=58, active=true]
2025-02-12T21:52:31.432072Z D i.q.g.e.t.AsyncGroupByRecordCursor collected [shard=0, frameIndex=24, frameCount=58, active=true, cursor=9223372036854775807]

........

2025-02-12T21:52:31.446768Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=1, id=59822, taskType=1, frameIndex=25, frameCount=58, active=true]
2025-02-12T21:52:32.074379Z D i.q.g.e.t.AsyncGroupByRecordCursor collected [shard=1, frameIndex=25, frameCount=58, active=true, cursor=9223372036854775807]

BTW, reading the touch doc, I notice "…or any condition which caused disk cache to flush". What could cause the disk cache to flush? I think this is getting triggered. How do I increase the parameters which control this?

Thanks.

Once you reach the limit of available RAM, as you request more pages, dirty pages will be flushed to disk and evicted from the cache, and replaced with new ones.

What disk are you running on? Check your metrics: are you anywhere near hitting IOPS/throughput limits?

It is easy to crush your throughput if you are ingesting out-of-order (O3) data, as this is an expensive copy-on-write process.

I have tinkered with the ulimit, fs.file-max, and vm.max_map_count parameters, but with no effect, as the limits are much higher. Also, when I run lsof on the process I notice the count is around 2000, which is not much. Is there any limitation when running the DB from the Docker image? The machine has about 64 GB and we are using SSD disks.

If you run iostat, what numbers do you get? Are you hitting the limits of your disks?

The iostat output looks fine, with utilization of 4.3%.

Linux 4.18.0-553.36.1.el8_10.x86_64 (phost)         02/24/2025      _x86_64_        (8 CPU)

     r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util Device
    0.60    1.81     30.2k     79.5k     0.00     0.11   0.1%   5.8%    1.52    1.32   0.00    50.0k    43.8k   0.30   0.1% nvme0n1
   29.55    1.87      5.5M    152.5k     0.00     0.29   0.0%  13.5%   59.60   10.08   1.78   190.1k    81.4k   1.36   4.3% nvme1n1

The logs have the following; please notice the timestamp jumps (marked with ***):

*** 2025-02-24T19:05:57.808706Z D i.q.c.s.a.PageFrameReduceJob reducing [shard=3, id=549197, taskType=0, frameIndex=55, frameCount=131, active=true, cursor=2438011]
*** 2025-02-24T19:05:58.575118Z D i.q.g.e.t.AsyncFilteredRecordCursor collected [shard=3, frameIndex=49, frameCount=131, frameId=549197, active=true, cursor=2438005]

2025-02-24T19:05:58.575159Z D i.q.g.e.t.AsyncFilteredRecordCursor collected [shard=3, frameIndex=50, frameCount=131, frameId=549197, active=true, cursor=2438006]
2025-02-24T19:05:58.575186Z D i.q.c.s.a.PageFrameSequence dispatched [shard=3, id=549197, frameIndex=65, frameCount=131, cursor=2438021]
2025-02-24T19:05:58.575188Z D i.q.c.s.a.PageFrameSequence dispatched [shard=3, id=549197, frameIndex=66, frameCount=131, cursor=2438022]

*** 2025-02-24T19:05:58.575188Z D i.q.c.s.a.PageFrameReduceJob reducing [shard=3, id=549197, taskType=0, frameIndex=56, frameCount=131, active=true, cursor=2438012]
*** 2025-02-24T19:05:59.652755Z D i.q.g.e.t.AsyncFilteredRecordCursor collected [shard=1, frameIndex=83, frameCount=127, frameId=549204, active=true, cursor=9223372036854775807]

2025-02-24T19:05:59.652795Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=1, id=549204, taskType=0, frameIndex=84, frameCount=127, active=true]
2025-02-24T19:05:59.736381Z D i.q.c.s.a.PageFrameReduceJob reducing [shard=3, id=549197, taskType=0, frameIndex=57, frameCount=131, active=true, cursor=2438013]
2025-02-24T19:05:59.948926Z D i.q.c.s.a.PageFrameReduceJob reducing [shard=3, id=549197, taskType=0, frameIndex=58, frameCount=131, active=true, cursor=2438014]

***  2025-02-24T19:05:59.948941Z D i.q.c.s.a.PageFrameReduceJob reducing [shard=3, id=549197, taskType=0, frameIndex=59, frameCount=131, active=true, cursor=2438015]
*** 2025-02-24T19:06:00.571364Z D i.q.c.s.a.PageFrameReduceJob reducing [shard=3, id=549197, taskType=0, frameIndex=60, frameCount=131, active=true, cursor=2438016]
***  2025-02-24T19:06:01.770218Z D i.q.g.e.t.AsyncFilteredRecordCursor collected [shard=3, frameIndex=51, frameCount=131, frameId=549197, active=true, cursor=2438007]

2025-02-24T19:06:01.770255Z D i.q.g.e.t.AsyncFilteredRecordCursor collected [shard=3, frameIndex=52, frameCount=131, frameId=549197, active=true, cursor=2438008]
2025-02-24T19:06:01.770256Z D i.q.g.e.t.AsyncFilteredRecordCursor collected [shard=3, frameIndex=53, frameCount=131, frameId=549197, active=true, cursor=2438009]
2025-02-24T19:06:01.770281Z D i.q.g.e.t.AsyncFilteredRecordCursor collected [shard=3, frameIndex=54, frameCount=131, frameId=549197, active=true, cursor=2438010]
2025-02-24T19:06:01.770306Z D i.q.g.e.t.AsyncFilteredRecordCursor collected [shard=3, frameIndex=55, frameCount=131, frameId=549197, active=true, cursor=2438011]
2025-02-24T19:06:01.770341Z D i.q.g.e.t.AsyncFilteredRecordCursor collected [shard=3, frameIndex=56, frameCount=131, frameId=549197, active=true, cursor=2438012]
2025-02-24T19:06:01.770361Z D i.q.c.s.a.PageFrameSequence dispatched [shard=3, id=549197, frameIndex=67, frameCount=131, cursor=2438023]
2025-02-24T19:06:01.770363Z D i.q.c.s.a.PageFrameSequence dispatched [shard=3, id=549197, frameIndex=68, frameCount=131, cursor=2438024]
2025-02-24T19:06:01.770364Z D i.q.c.s.a.PageFrameSequence dispatched [shard=3, id=549197, frameIndex=69, frameCount=131, cursor=2438025]
2025-02-24T19:06:01.770365Z D i.q.c.s.a.PageFrameSequence dispatched [shard=3, id=549197, frameIndex=70, frameCount=131, cursor=2438026]
2025-02-24T19:06:01.770366Z D i.q.c.s.a.PageFrameSequence dispatched [shard=3, id=549197, frameIndex=71, frameCount=131, cursor=2438027]
2025-02-24T19:06:01.770367Z D i.q.c.s.a.PageFrameSequence dispatched [shard=3, id=549197, frameIndex=72, frameCount=131, cursor=2438028]
2025-02-24T19:06:01.770368Z D i.q.c.s.a.PageFrameReduceJob reducing [shard=3, id=549197, taskType=0, frameIndex=61, frameCount=131, active=true, cursor=2438017]
2025-02-24T19:06:02.189180Z D i.q.c.s.a.PageFrameReduceJob reducing [shard=3, id=549197, taskType=0, frameIndex=62, frameCount=131, active=true, cursor=2438018]

**** 2025-02-24T19:06:02.189198Z D i.q.s.DirectLongList resized [old=2048, new=7350552]
**** 2025-02-24T19:06:04.060287Z D i.q.c.s.a.PageFrameReduceJob reducing [shard=3, id=549197, taskType=0, frameIndex=63, frameCount=131, active=true, cursor=2438019]

2025-02-24T19:06:04.060307Z D i.q.s.DirectLongList resized [old=7337216, new=7350552]
2025-02-24T19:06:04.148093Z D i.q.g.e.t.AsyncFilteredRecordCursor collected [shard=1, frameIndex=84, frameCount=127, frameId=549204, active=true, cursor=9223372036854775807]
2025-02-24T19:06:04.148137Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=1, id=549204, taskType=0, frameIndex=85, frameCount=127, active=true]
2025-02-24T19:06:04.148148Z D i.q.g.e.t.AsyncFilteredRecordCursor collected [shard=1, frameIndex=85, frameCount=127, frameId=549204, active=true, cursor=9223372036854775807]
2025-02-24T19:06:04.148150Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=1, id=549204, taskType=0, frameIndex=86, frameCount=127, active=true]
2025-02-24T19:06:04.148155Z D i.q.s.DirectLongList resized [old=7290416, new=7298224]
2025-02-24T19:06:04.503145Z D i.q.c.s.a.PageFrameReduceJob reducing [shard=3, id=549197, taskType=0, frameIndex=64, frameCount=131, active=true, cursor=2438020]
2025-02-24T19:06:04.503163Z D i.q.c.s.a.PageFrameReduceJob reducing [shard=3, id=549197, taskType=0, frameIndex=65, frameCount=131, active=true, cursor=2438021]
2025-02-24T19:06:05.646815Z D i.q.g.e.t.AsyncFilteredRecordCursor collected [shard=3, frameIndex=57, frameCount=131, frameId=549197, active=true, cursor=2438013]
2025-02-24T19:06:05.646856Z D i.q.g.e.t.AsyncFilteredRecordCursor collected [shard=3, frameIndex=58, frameCount=131, frameId=549197, active=true, cursor=2438014]
2025-02-24T19:06:05.646857Z D i.q.g.e.t.AsyncFilteredRecordCursor collected [shard=3, frameIndex=59, frameCount=131, frameId=549197, active=true, cursor=2438015]
2025-02-24T19:06:05.646894Z D i.q.g.e.t.AsyncFilteredRecordCursor collected [shard=3, frameIndex=60, frameCount=131, frameId=549197, active=true, cursor=2438016]
2025-02-24T19:06:05.646923Z D i.q.g.e.t.AsyncFilteredRecordCursor collected [shard=3, frameIndex=61, frameCount=131, frameId=549197, active=true, cursor=2438017]
2025-02-24T19:06:05.646945Z D i.q.c.s.a.PageFrameSequence dispatched [shard=3, id=549197, frameIndex=73, frameCount=131, cursor=2438029]
2025-02-24T19:06:05.646947Z D i.q.c.s.a.PageFrameSequence dispatched [shard=3, id=549197, frameIndex=74, frameCount=131, cursor=2438030]
2025-02-24T19:06:05.646948Z D i.q.c.s.a.PageFrameSequence dispatched [shard=3, id=549197, frameIndex=75, frameCount=131, cursor=2438031]
2025-02-24T19:06:05.646949Z D i.q.c.s.a.PageFrameSequence dispatched [shard=3, id=549197, frameIndex=76, frameCount=131, cursor=2438032]
2025-02-24T19:06:05.646950Z D i.q.c.s.a.PageFrameSequence dispatched [shard=3, id=549197, frameIndex=77, frameCount=131, cursor=2438033]
2025-02-24T19:06:05.646951Z D i.q.c.s.a.PageFrameReduceJob reducing [shard=3, id=549197, taskType=0, frameIndex=66, frameCount=131, active=true, cursor=2438022]
2025-02-24T19:06:06.844159Z D i.q.c.s.a.PageFrameReduceJob reducing [shard=3, id=549197, taskType=0, frameIndex=67, frameCount=131, active=true, cursor=2438023]
2025-02-24T19:06:06.883914Z D i.q.c.s.a.PageFrameReduceJob reducing [shard=3, id=549197, taskType=0, frameIndex=68, frameCount=131, active=true, cursor=2438024]
2025-02-24T19:06:08.018270Z D i.q.g.e.t.AsyncFilteredRecordCursor collected [shard=1, frameIndex=86, frameCount=127, frameId=549204, active=true, cursor=9223372036854775807]
2025-02-24T19:06:08.018312Z D i.q.c.s.a.PageFrameSequence reducing locally [shard=1, id=549204, taskType=0, frameIndex=87, frameCount=127, active=true]
2025-02-24T19:06:08.820241Z D i.q.g.e.t.AsyncFilteredRecordCursor collected [shard=3, frameIndex=62, frameCount=131, frameId=549197, active=true, cursor=2438018]
2025-02-24T19:06:08.820281Z D i.q.c.s.a.PageFrameSequence dispatched [shard=3, id=549197, frameIndex=78, frameCount=131, cursor=2438034]
2025-02-24T19:06:08.820283Z D i.q.c.s.a.PageFrameReduceJob reducing [shard=3, id=549197, taskType=0, frameIndex=69, frameCount=131, active=true, cursor=2438025]
2025-02-24T19:06:09.175791Z D i.q.c.s.a.PageFrameReduceJob reducing [shard=3, id=549197, taskType=0, frameIndex=70, frameCount=131, active=true, cursor=2438026]
2025-02-24T19:06:09.175807Z D i.q.c.s.a.PageFrameReduceJob reducing [shard=3, id=549197, taskType=0, frameIndex=71, frameCount=131, active=true, cursor=2438027]
2025-02-24T19:06:09.175813Z D i.q.s.DirectLongList resized [old=7360112, new=7365304]
2025-02-24T19:06:09.600070Z D i.q.c.s.a.PageFrameReduceJob reducing [shard=3, id=549197, taskType=0, frameIndex=72, frameCount=131, active=true, cursor=2438028]
2025-02-24T19:06:10.514732Z D i.q.c.s.a.PageFrameReduceJob reducing [shard=3, id=549197, taskType=0, frameIndex=73, frameCount=131, active=true, cursor=2438029]
2025-02-24T19:06:11.295444Z D i.q.c.s.a.PageFrameReduceJob reducing [shard=3, id=549197, taskType=0, frameIndex=74, frameCount=131, active=true, cursor=2438030]

Those are different cursors. Since this is running in parallel, each separate cursor has a number, e.g. cursor=2438015. I'm not sure that looking at the logs at that granular a level tells you very much.

The query is split into frames. In the above snippet, they are frames 59, 60, and 51. These are handled by separate threads, with their own cursors 2438015, 2438016, and 2438007.

You would need to compare specific tasks to get a better estimate.
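As a rough example of matching a single frame across the snippet above (my interpretation: "dispatched" means the frame was queued, "reducing" means a worker picked it up):

2025-02-24T19:05:58.575186Z ... PageFrameSequence dispatched [shard=3, id=549197, frameIndex=65, ... cursor=2438021]
2025-02-24T19:06:04.503163Z ... PageFrameReduceJob reducing [shard=3, id=549197, ... frameIndex=65, ... cursor=2438021]

So frame 65 waited roughly 6 seconds between being queued and being reduced, which would suggest the workers were busy (or stalled) on other frames during that window.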

Let me tag @puzpuzpuz at this point!