Timeout on export query

Hello,

I have a program that I need to modify, and for each version I generate the results and load them into QuestDB so I can compare them across versions and check that there is no regression. The problem is that each run generates several million rows, and when I try to compare results across multiple runs I get a timeout. I can't share the exact query, but it looks something like this:

WITH
  t1 AS (SELECT … FROM table WHERE Run = n),
  t2 AS (SELECT … FROM table WHERE Run = n+1)
SELECT * FROM t1
JOIN t2 ON t1.timestamp = t2.timestamp WHERE …

As I'm not too familiar with SQL, this might be the issue. I also saw on the website that you can specify a timeout for imports, but is there a way to do it for exports? In my case I can't just download the data as CSV and compare the files.

Hey, can you run your query with EXPLAIN and share the plan?
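For reference, you can prefix the query with EXPLAIN to get the execution plan. A sketch using a simplified stand-in for the real query (the table and column names here are hypothetical):

```sql
-- Hypothetical table 'results'; substitute your real query after EXPLAIN
EXPLAIN
WITH
  t1 AS (SELECT timestamp, metric FROM results WHERE Run = 5),
  t2 AS (SELECT timestamp, metric FROM results WHERE Run = 6)
SELECT * FROM t1
JOIN t2 ON t1.timestamp = t2.timestamp;
```

The plan shows which join strategy QuestDB picks and where most of the work happens.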

You can increase the query timeout with the query.timeout.sec setting in server.conf. It defaults to 60 seconds.
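For example, in server.conf (the value of 300 below is just an illustrative choice, not a recommendation):

```ini
# server.conf — raise the per-query timeout from the 60 s default
query.timeout.sec=300
```

A restart of the server is needed for the new value to take effect.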

Hi, thanks for the timeout fix, it's working well now.

No problem. If you need more help, let us know.

We may be able to help improve your data layout or queries 🙂

Hi,
It was working well this morning, but now the queries give me: sun.misc.Unsafe.reallocateMemory() OutOfMemoryError [RSS_MEM_USED=3581902412, oldSize=2147483648, newSize=4294967296, memoryTag=29], original message: unable to allocate 4294967296 bytes.

These are the same queries, but the table has grown a bit (there are more runs, but no more elements per run). Is there a way to fix this?

What does your deployment look like? Did you follow this?

If you could share your query and schema too, that would help. It may be that you have too many groups or too large a join.
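One thing that often shrinks a join's memory footprint is narrowing the CTEs to only the columns being compared, and filtering to rows that actually differ. A sketch with hypothetical column names (timestamp, Run, metric):

```sql
-- Hypothetical schema: 'results' with columns (timestamp, Run, metric)
WITH
  t1 AS (SELECT timestamp, metric FROM results WHERE Run = 5),
  t2 AS (SELECT timestamp, metric FROM results WHERE Run = 6)
SELECT
  t1.timestamp,
  t1.metric AS old_metric,
  t2.metric AS new_metric
FROM t1
JOIN t2 ON t1.timestamp = t2.timestamp
WHERE t1.metric != t2.metric;  -- keep only rows that changed between runs
```

Selecting every column with SELECT * forces the join to materialize far more data than the comparison needs.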