Stardog on an Ubuntu Linux VM - memory capacity


We are experiencing performance problems with Stardog queries (about 500,000 ms minimum to get an answer). We followed the Debian-based systems installation described in the Stardog documentation and have a stardog service installed on our Ubuntu VM.

  • Azure machine: Standard D4s v3 (4 virtual processors)
  • Total VM memory: 16 GiB

We tested several JVM memory settings:
-Xms4g -Xmx4g -XX:MaxDirectMemorySize=8g
-Xms8g -Xmx8g -XX:MaxDirectMemorySize=8g
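Whichever values you set, it is worth verifying that the running JVM actually received them. A quick check on Linux (a sketch; it assumes the server runs as a single `java` process):

```shell
# Print the memory-related flags of the running java process
# (adjust the filter if more than one java process is running).
ps -o args= -C java | tr ' ' '\n' | grep -E '^-Xm|MaxDirectMemorySize'
```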

We also tried upgrading the VM to a larger machine, but without success:
Azure: Standard D8s v3 (8 virtual processors, 32 GiB memory)

Running the command systemctl status stardog on the machine with 32 GiB of memory,
we get:

Given that only the Stardog server is installed on this VM, with an 8 GB JVM heap and 20 GB of direct memory for Java, is it normal to see 1.9 GB of memory used when idle (no query in progress)
and 4.1 GB when a query is in progress?

"databases.xxxx.queries.latency": {
"count": 7,
"max": 471.44218324400003,
"mean": 0.049260736982859085,
"min": 0.031328932000000004,
"p50": 0.048930366,
"p75": 0.048930366,
"p95": 0.048930366,
"p98": 0.048930366,
"p99": 0.048930366,
"p999": 0.048930366,
"stddev": 0.3961819852037625,
"m15_rate": 0.0016325388459502614,
"m1_rate": 0.0000015369791915358426,
"m5_rate": 0.0006317127755974434,
"mean_rate": 0.0032760240366080024,
"duration_units": "seconds",
"rate_units": "calls/second"
}

Is it necessary to allocate memory to the stardog service in some other way?
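One common way to pass memory settings to a systemd-managed service is a drop-in override; a sketch (STARDOG_JAVA_ARGS is Stardog's standard mechanism for JVM options, but verify your unit with `systemctl cat stardog` first, and note the values below are examples, not recommendations):

```shell
sudo systemctl edit stardog
# In the editor that opens, add:
#   [Service]
#   Environment="STARDOG_JAVA_ARGS=-Xms8g -Xmx8g -XX:MaxDirectMemorySize=20g"
# Then reload and restart the service:
sudo systemctl daemon-reload
sudo systemctl restart stardog
```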
Thank you for your help

Here are the results of the command stardog-admin server status:

Access Log Enabled : true
Access Log Type : text
Audit Log Enabled : true
Audit Log Type : text
Backup Storage Directory : .backup
CPU Load : 1.88 %
Connection Timeout : 10m
Export Storage Directory : .exports
Memory Heap : 305M (Max: 8.0G)
Memory Mode : DEFAULT{Starrocks.block_cache=20, Starrocks.dict_block_cache=10, Native.starrocks=70, Heap.dict_value=50, Starrocks.txn_block_cache=5, Heap.dict_index=50, Starrocks.untracked_memory=20, Starrocks.memtable=40, Starrocks.buffer_pool=5, Native.query=30}
Memory Query Blocks : 0B (Max: 5.7G)
Memory RSS : 4.3G
Named Graph Security : false
Platform Arch : amd64
Platform OS : Linux 5.15.0-1031-azure, Java 1.8.0_352
Query All Graphs : false
Query Timeout : 1h
Security Disabled : false
Stardog Home : /var/opt/stardog
Stardog Version : 8.1.1
Strict Parsing : true
Uptime : 2 hours 18 minutes 51 seconds

Here is the profile of the slowest query:


Profiling results:
Query executed in 430029 ms and returned 17334 result(s)
Total used memory: 9.4M
Pre-execution time: 16 ms (0.0%)
Post-processing time: 13 ms (0.0%)

Thank you


According to the provided profile, the high query execution time is not related to the memory settings. Instead, the query plan optimizer seems to select a sub-optimal operator to retrieve data from the other database (db://johndoe_DICO). You can see in the profile that the ServiceJoin operator accounts for the largest portion of the total execution time:

ServiceJoin [#5.0K], results: 101K, wall time: 428910 ms (99.7%)

This can occur due to stale statistics or cardinality estimation errors. Therefore, as a first step, I suggest running db optimize to update the statistics. If this does not help, you can use the following query hint to force the optimizer not to use a ServiceJoin:

#pragma join.service off
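The hint is written as a comment at the top of the query text. A sketch of its placement (the prefix and triple patterns are invented for illustration; the SERVICE target db://johndoe_DICO is the database mentioned above):

```sparql
#pragma join.service off
PREFIX ex: <http://example.org/>
SELECT ?s ?label WHERE {
  ?s a ex:Thing .
  SERVICE <db://johndoe_DICO> {
    ?s ex:label ?label
  }
}
```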

According to your profile, you seem to be running an older version of Stardog. Upgrading can also help, as recent releases include changes that should improve the query plans for this type of query (e.g., #PLAT-2650).

Best regards



Hi Lars,

My version of Stardog Server : 8.1.1
I optimized the two bases used by the request then I put the #pragma join.service off in the request and launch a profiler, very fast results on profiler but not when i launch the query.

Profiler time: 678 ms
Query time: 468,737 ms

Profiling results:
Query executed in 678 ms and returned 17334 result(s)
Total used memory: 11M
Pre-execution time: 12 ms (1.8%)
Post-processing time: 12 ms (1.8%)

Thank you


The profiling results you shared look good. However, it is unexpected that running the query (as opposed to profiling it) still yields such high execution times. To rule out the query plan cache as the cause, you can take the DBs offline and then online again; this clears the plan cache. (Alternatively, you can add #pragma plan.cache off to make sure that no cached plans are reused.)
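The offline/online cycle can be done from the CLI ("myDb" is a placeholder for your database name):

```shell
# Taking a database offline and back online clears its cached query plans
stardog-admin db offline myDb
stardog-admin db online myDb
```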

Regardless, I suggest updating to the latest release (v8.2.1). As previously mentioned, the newer version includes improvements for queries over federations (e.g., other databases), which should alleviate the problem without requiring the query hint.

Best regards


It's working, thank you.
I will upgrade the Stardog server to version 8.2.1.

Best regards.
