We are looking at using QDB to replace the part of our pipeline that currently uses Kafka for both streaming and storage, decoupling the storage side and handling it with QDB.
What we are interested in at the moment is documentation on how to scale QDB as we grow, and whether dynamic scaling is supported. We are an Azure shop, if that makes a difference.
There are different ways of scaling QuestDB. You can add more CPUs/memory to a single instance and use it for both writes and reads, or (with QuestDB Enterprise) you can configure multiple read replicas to scale out reads if needed. Having said that, we see users processing thousands of queries per second on a single instance, so it depends a lot on your use case.
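To make the read-replica option more concrete, here is a minimal sketch of application-level routing over QuestDB's PostgreSQL wire protocol (port 8812 by default): writes go to the primary, reads are spread across replica endpoints. The host names, table name, and the psycopg2 client are assumptions for illustration, not part of any official setup.

```python
# Sketch: route writes to the primary and fan reads out across replicas
# over QuestDB's PostgreSQL wire protocol (default port 8812).
# Host and table names below are placeholders.
import itertools
import psycopg2

PRIMARY = "questdb-primary.internal"
REPLICAS = ["questdb-replica-1.internal", "questdb-replica-2.internal"]

def connect(host: str):
    # Default QuestDB PGWire credentials; adjust for your deployment.
    return psycopg2.connect(
        host=host, port=8812, user="admin", password="quest", dbname="qdb"
    )

# Simple round-robin over read replicas.
_replica_cycle = itertools.cycle(REPLICAS)

def run_write(sql: str, params=None):
    with connect(PRIMARY) as conn, conn.cursor() as cur:
        cur.execute(sql, params)

def run_read(sql: str, params=None):
    with connect(next(_replica_cycle)) as conn, conn.cursor() as cur:
        cur.execute(sql, params)
        return cur.fetchall()

# Example usage:
# run_write("INSERT INTO trades(ts, symbol, price) VALUES (now(), %s, %s)",
#           ("BTC-USD", 42000.0))
# rows = run_read("SELECT symbol, avg(price) FROM trades "
#                 "WHERE ts > dateadd('h', -1, now()) GROUP BY symbol")
```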
For scaling storage, you can configure QuestDB Enterprise to move older partitions automatically to object storage. Those partitions remain available for querying (with increased latency on first access), while freeing up space on your local filesystem.
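As a small illustration of what that means on the query side, the sketch below (assuming a hypothetical `trades` table with designated timestamp `ts`, queried through QuestDB's HTTP `/exec` endpoint on port 9000) bounds one query to a recent time window so it only touches partitions still on local disk, while an unbounded historical scan may need partitions that were offloaded to object storage.

```python
# Sketch: the time filter determines which partitions a query touches.
# Recent, bounded queries stay on fast local partitions; unbounded
# historical scans may read partitions offloaded to object storage.
# Host, table, and column names are placeholders for illustration.
import requests

QUESTDB_HTTP = "http://questdb-primary.internal:9000"

def query(sql: str):
    resp = requests.get(f"{QUESTDB_HTTP}/exec", params={"query": sql})
    resp.raise_for_status()
    return resp.json()["dataset"]

# Touches only the most recent partition(s), which are still local.
recent = query(
    "SELECT symbol, avg(price) FROM trades "
    "WHERE ts > dateadd('d', -1, now()) GROUP BY symbol"
)

# Spans the full history, so older offloaded partitions may be fetched
# from object storage, with higher latency on first access.
full_history = query("SELECT symbol, count() FROM trades GROUP BY symbol")
```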