Out-of-memory exception while uploading not-so-large triple files (~100 MB)

At the moment we have to upload several files to a Stardog repository, but for the past few days we have had a problem that did not occur before.
In the beginning we were able to send files of several hundred MB to Stardog. At some point we even tried larger files of around 1 GB, but because of the way Stardog handles the connection and the files, those uploads took forever, so we stopped them and split the data into smaller files. Since a few days ago, however, we get:

com.complexible.common.protocols.server.rpc.ServerHandler:exceptionCaught(413): exceptionCaughtServerHandler
java.lang.OutOfMemoryError: Direct buffer memory

I do understand that large files will use a lot of memory, but this did not throw an exception before. One difference compared to our earlier runs is that we are now working with a very large repository (48 million triples). Could the memory exception be caused by the size of the repository in combination with the size of the uploaded file?
Second question: the machine has 4 GB of memory available. Is there a way to scale up the amount of memory Stardog uses?

Thanks in advance,

Nicky

What version of Stardog are you using?

You can adjust the amount of memory allocated to Stardog by setting the environment variable STARDOG_JAVA_ARGS. The default is the following:

STARDOG_JAVA_ARGS="-Xmx2g -Xms2g -XX:MaxDirectMemorySize=1g"

Stardog memory recommendations are located here in the documentation.
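
For example, on a Linux install you could export the variable before starting the server. The values below are only a sketch for a 4 GB machine and not an official recommendation; the idea is that a "Direct buffer memory" error points at off-heap (direct) memory rather than heap, so you would shift the split accordingly:

# Illustration only: give more of a 4 GB box to direct (off-heap) memory,
# leaving room for the OS and any other processes.
export STARDOG_JAVA_ARGS="-Xmx1g -Xms1g -XX:MaxDirectMemorySize=2g"

# Restart the server so the new settings take effect
stardog-admin server stop
stardog-admin server start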

We are running 4.2 with an enterprise license.

We have a dedicated VPS with 4 GB of memory; what would be the ideal configuration?

And, more importantly, how can we prevent this hard error? The only way to recover right now is a server restart.

I’m not a Stardog person, just a rabid Stardog fan, so unfortunately I don’t know enough about the internals to say whether your database size would affect memory usage when loading data. (I seem to recall Evren responding to some feedback I had on data-loading performance regarding that; I’ll try to find it.) 4 GB isn’t much, but according to the docs you should be good up to about 100M triples.

There are a bunch of changes in 5.0 regarding memory management that should avoid OOM problems like the one you’re having, although I realize that upgrading may not be an option.

How are you loading the data: the Java API (SNARL), the web console, or the CLI? If you’re using SNARL, it’s possible that you’re not freeing resources, but I would expect that to cause problems with heap space, whereas it looks like you’re having problems off-heap.
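
If it helps, loading through the CLI side-steps any client-side resource management. Just as a sketch (the database name and file names below are placeholders), it looks roughly like this:

# Placeholder names: add data to an existing database in smaller batches
stardog data add myDb part1.ttl part2.ttl

# For a large initial load, files can also be passed at database creation time
stardog-admin db create -n myDb alldata.ttl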

Are you seeing the problems consistently, or only intermittently? Have you tried restarting Stardog? If so, and it seems to fix the problem, does the problem reoccur after some time?

…It looks like the only thing mentioned previously is that load times are highly dependent on off-heap memory.

The changes in release 5.0 seem to solve the issues we encountered. We did some testing and Stardog no longer runs into an out-of-memory exception.
