HTTP Upload of TRiG always goes to default graph

If I have a TRiG file such as <urn:tiny> { <urn:foo> a <urn:bar> } and I upload it using curl, e.g.

curl -H "Content-Type: application/trig" -d @tiny.trig https://my.stardog.server/mydb

the single triple goes into the default graph, in spite of the specified graph name. When I use 'Load Data' from Stardog Studio on the same file, the triple goes into the correct named graph. I know I can override the destination graph by adding a ?graph=... to the upload URI, but I would like it to come from the data. What am I missing?
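For reference, the ?graph= override I mentioned looks like this (the server URL is a placeholder, and the file here is plain Turtle since the destination graph comes from the query string rather than the data):

```shell
# Upload a Turtle file into a specific named graph using the
# SPARQL 1.1 Graph Store HTTP Protocol 'graph' query parameter.
curl -u admin:admin \
     -H "Content-Type: text/turtle" \
     --data-binary @tiny.ttl \
     "https://my.stardog.server/mydb?graph=urn:tiny"
```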

Rather than use the raw HTTP API, I would recommend using a higher-level API such as pystardog or stardog.js.

The higher-level APIs, as well as Stardog Studio, use transactions when adding data. It is also possible to do this via curl using the following commands:

❯ curl -X POST http://localhost:5820/myDB/transaction/begin -u admin:admin
71889861-d025-44cc-90d9-75c448924122
❯ curl -H "Content-Type: application/trig" -d @sample.trig http://localhost:5820/myDB/71889861-d025-44cc-90d9-75c448924122/add -u admin:admin
❯ curl -X POST http://localhost:5820/myDB/transaction/commit/71889861-d025-44cc-90d9-75c448924122 -u admin:admin
{
  "added": 4,
  "removed": 0
}

I am trying to keep my app as vendor-independent as possible; much of the time my software stack is determined by my customers. Therefore I would like to stick to the SPARQL 1.1 Graph Store HTTP Protocol if at all possible.

Unfortunately, the SPARQL 1.1 Graph Store HTTP Protocol only supports acting on a single graph per call, so there is currently no way to load TriG files without using a vendor-specific solution. There is a proposed solution at W3C to support TriG and N-Quads: Extend Graph Store HTTP Protocol to support operations on the RDF Dataset · Issue #56 · w3c/sparql-12 · GitHub.

In the meantime, here are three options for you to evaluate:

1 - Convert your TriG file into a SPARQL UPDATE query: INSERT DATA { GRAPH :g1 {...} GRAPH :g2 {...} }. The advantage is that this is not vendor-specific and should work with every database that supports SPARQL 1.1 Update. The disadvantage is that you need to pre-process your TriG file.
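To make option 1 concrete, the one-triple TriG file from the question would become an update like the following (the server URL and the /mydb/update path are assumptions based on Stardog's conventions; other stores expose their own SPARQL Update endpoint):

```shell
# Equivalent SPARQL UPDATE for the one-triple TriG file.
cat > tiny.rq <<'EOF'
INSERT DATA {
  GRAPH <urn:tiny> { <urn:foo> a <urn:bar> }
}
EOF

# POST it to the database's SPARQL 1.1 Update endpoint.
curl -u admin:admin \
     -H "Content-Type: application/sparql-update" \
     --data-binary @tiny.rq \
     "https://my.stardog.server/mydb/update"
```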

2 - Create one Turtle file per graph you want to load and upload each to its target graph. This option has the same advantage and disadvantage as the previous one. While a Turtle file may be easier to read than a SPARQL UPDATE, this approach is not ACID, since each graph is loaded in a separate request.
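A minimal sketch of option 2, assuming the data has already been split into one Turtle file per graph (the filenames, graph names, and server URL are illustrative):

```shell
# Load each Turtle file into its own named graph via the standard
# 'graph' query parameter. Each request is a separate transaction,
# so a failure partway through leaves the database partially loaded.
for g in tiny other; do
  curl -u admin:admin \
       -H "Content-Type: text/turtle" \
       --data-binary @"${g}.ttl" \
       "https://my.stardog.server/mydb?graph=urn:${g}"
done
```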

3 - Alternatively, write a function that uses Stardog's proprietary /{db}/{txId}/add API, as in the transaction example above. Keeping all the TriG-loading code in one place mitigates the work required if you move to another vendor: you would only need to replace the Stardog-specific code with that vendor's proprietary API, or at worst with logic similar to options #1 or #2.
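A sketch of option 3, wrapping the begin/add/commit calls shown earlier in a single shell function so the vendor-specific paths live in one place (the server URL and credentials are placeholders):

```shell
# load_trig DB FILE -- load a TriG file through Stardog's
# proprietary transaction API: begin, add, commit.
STARDOG="https://my.stardog.server"
CREDS="admin:admin"

load_trig() {
  local db="$1" file="$2" tx
  # begin returns the new transaction id in the response body
  tx=$(curl -s -u "$CREDS" -X POST "$STARDOG/$db/transaction/begin")
  curl -s -u "$CREDS" -H "Content-Type: application/trig" \
       --data-binary @"$file" "$STARDOG/$db/$tx/add"
  curl -s -u "$CREDS" -X POST "$STARDOG/$db/transaction/commit/$tx"
}

# Usage: load_trig mydb tiny.trig
```

If you later move to another store, only this one function needs to change.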

There may be other options, but of these three we recommend the last, since it mitigates the risk of using a proprietary API while keeping the code reusable, which should make maintenance easier in the future.

