Query endpoint optimisation


I currently have a setup where I'm executing a large number of queries very fast (aiming for 2000 queries/second).
The query looks like this:

SELECT ?p ?o ?dt WHERE {
  BIND( IRI("'+ind+'") AS ?s )
  ?s ?p ?o .
  BIND (datatype(?o) AS ?dt)
}

After a while (when about 25% of the total number of ind's have been processed), the query engine gets stuck.
In the logs, I see warning messages that 100% of the memory is in use...

I have increased both the max memory and heap-size parameters, but the result remains the same (stuck at 25%).

Are there any other parameters which can be set to increase the performance of SELECT queries?

Hi Bram,

How exactly does the message look? When a stall like that appears, could you do two things:

  • collect a thread dump (e.g. using the jstack tool) of the server process
  • collect the server metrics using stardog-admin server metrics and share them with us?

Re: optimisation, we have an open ticket to improve the performance of simple queries like that (essentially by skipping most of the optimisation work, since it's unnecessary for such queries). However, I don't think it would help much if the issue is memory. What would help most, I believe, is batching the requests using queries like:

SELECT ?s ?p ?o ?dt WHERE {
  VALUES ?s { :ind1 :ind2 ... :indn }
  ?s ?p ?o .
  BIND (datatype(?o) AS ?dt)
}

That normally results in much higher throughput regardless of memory issues.
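To make the batching idea concrete, here is a minimal client-side sketch in Python. It only builds the query strings; the names (`BATCH_SIZE`, `batched_queries`, the example IRIs) are illustrative and not part of any Stardog API, and you would still send each generated query to the server with whatever client you already use.

```python
# Sketch: group individual IRIs into VALUES batches so each round trip
# to the server covers many ind's instead of one.

BATCH_SIZE = 100  # tune to your workload

QUERY_TEMPLATE = """SELECT ?s ?p ?o ?dt WHERE {{
  VALUES ?s {{ {values} }}
  ?s ?p ?o .
  BIND (datatype(?o) AS ?dt)
}}"""

def batched_queries(inds, batch_size=BATCH_SIZE):
    """Yield one SELECT query per batch of individual IRIs."""
    for i in range(0, len(inds), batch_size):
        batch = inds[i:i + batch_size]
        # Full IRIs are wrapped in <...>; prefixed names would also work.
        values = " ".join(f"<{iri}>" for iri in batch)
        yield QUERY_TEMPLATE.format(values=values)
```

With 2000 ind's and a batch size of 100, this turns 2000 single-subject queries into 20 batched ones, which is usually far cheaper per result row.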


Hi Pavel,

Thanks for the quick response.
Attached is the metrics file:
metrics.log (20.9 KB)

Regarding the query optimisation: it is an iterative process, so batching would require that all those "ind's" are known upfront (which is not always the case).

Kind regards,

Hi again,

I found the cause of the behaviour described above, and it has nothing to do with Stardog.
The program that iterates over the different ind's had a memory leak.
And (bad practice) I was running the Stardog database on the same cluster as that program.

So total memory usage was becoming critical (which is why Stardog produced those log messages).

Fixing the leak and running Stardog on a different node solved the problem.

Kind regards,

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.