Scaling up Stardog

Hi all, what would be the ideal setup (hardware + topology) for scaling up Stardog to 1, 10, or 100 billion triples?

Is there a table with the minimum configuration for each case?

I do not plan to use text or spatial search in Stardog; I only want to use Stardog for pure RDF storage.

I would appreciate it if someone could comment on the topic.

Best,

The section on capacity planning in the docs gives a broad outline.

The main driving resource would probably be memory. The rest is important as well, but by the time you get a machine with 256 GB of RAM you're probably going to have CPU and storage covered.
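To make that a bit more concrete: Stardog uses both JVM heap and direct (off-heap) memory, and you control both through STARDOG_SERVER_JAVA_ARGS. A rough sketch of what a larger box might look like follows; the numbers are purely illustrative, so size them from the capacity planning table for your actual triple count:

```
# Export before running `stardog-admin server start`.
# Stardog uses heap plus direct (off-heap) memory; the docs generally
# suggest giving off-heap the larger share on big databases.
export STARDOG_SERVER_JAVA_ARGS="-Xms30g -Xmx30g -XX:MaxDirectMemorySize=160g"
```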

As far as topology is concerned, the clustering is HA, so I'm guessing you're going to want as fast an interconnect as possible.
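The cluster nodes coordinate through a ZooKeeper ensemble, so each node's stardog.properties ends up looking roughly like the sketch below (placeholder hostnames; check the cluster docs for the full set of properties):

```
# Minimal cluster-related settings on each Stardog node (placeholder addresses).
pack.enabled=true
pack.zookeeper.address=zk1:2181,zk2:2181,zk3:2181
```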

Just a quick point of clarification: You say you aren’t using “text or spatial search.” Does that mean you don’t plan to involve spatial at all?

More broadly, when you say “pure RDF storage,” do you plan to use reasoning?

We do not plan to use spatial or text search because we prefer to handle them outside of Stardog. We need tuning that Stardog does not provide at the moment, or at least it is not clear to us how to do it.

Our experience with Stardog and spatial search was not good, so we are doing it outside of Stardog.

We do not use inference either.
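For context, we simply create our databases with both of those indexes left off, something like the command below (option names written from memory, so double-check them against the current docs):

```
# Create a database without full-text search or geospatial indexing.
stardog-admin db create -o search.enabled=false spatial.enabled=false -n mydb
```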

Have you tried the spatial features of Stardog 5? We made some pretty substantial improvements (especially with respect to speed) over 4.x. In either case, what did you find lacking that would have improved your experience?
