I am using Python. I want to bulk load data into Stardog. Could you advise how to do it? I tried SPARQLStore, but it's discontinued and I failed to use it with basic authentication. I am able to use SPARQLWrapper with basic authentication to SELECT data, but I couldn't figure out how to insert data into Stardog.
If you can send me some sample code of bulk load or give me some directions, that’ll be great. Thanks!
With the code below, the graph is updated (one record inserted into the graph), but it's not actually loaded into Stardog. My question is: how can I load data into the Stardog database? I am trying to make INSERT work first before troubleshooting LOAD…
import rdflib

g = rdflib.Graph()
# Pull whatever the endpoint returns into the in-memory graph
g.load('http://localhost:5820/VirtualTesting')
print("graph has %s statements." % len(g))
# Run the update against the local in-memory graph
g.update(
    """
    PREFIX core: <http://ontologies.com/core#>
    INSERT DATA {
        core:Account2 core:semanticMatch core:_datasetElement_900005 .
    }
    """)
print("graph has %s statements." % len(g))
I'm not too familiar with RDFLib, but I don't think g.load() is doing what you think it is. I think load() will try to load whatever is returned from that URL into the in-memory store. (I can't remember whether that URL returns the SPARQL service description; if it does, it might load that. I'm not sure if that's what you were intending to do.) I'm not sure why the insert isn't loading, although the object URI whose local name starts with an underscore is a little strange. I'd have to check whether that's valid. (I think it's OK in TTL but might be problematic in RDF/XML.)
I have already tried SPARQLWrapper. However, it only supports SELECT (which works very well); it doesn't support UPDATE or INSERT. The only workaround is to use the Stardog HTTP API, and I am able to insert one triple into the database with one API call.
Please let me know if there is any other way to do a bulk load with Python. I am looking for a way to bulk load 2,000 triples into the Stardog database. Alternatively, I could create a file to store all the triples, but how do I load the file with Python, or is there an HTTP API to load a file?
Thank you for the link. I am able to either use SPARQLWrapper or an HTTP update query to do the insert. Both methods send one INSERT command for each triple. Please correct me if I am wrong.
What's the recommended (more efficient) way to load a large number of triples? Instead of sending INSERT 2,000 times, is there a way to INSERT or LOAD all the triples at once?
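A single SPARQL INSERT DATA update can contain any number of triples, so one option is to build the whole batch into one update and send it in a single HTTP call. Here is a minimal sketch using requests; the endpoint path /VirtualTesting/update (the SPARQL 1.1 update endpoint) and the admin/admin credentials are assumptions, so check them against your Stardog setup:

import requests

# Hypothetical list of (subject, predicate, object) terms in prefixed form
triples = [
    ("core:Account2", "core:semanticMatch", "core:_datasetElement_900005"),
    # ... the rest of your ~2,000 triples
]

update = "PREFIX core: <http://ontologies.com/core#>\nINSERT DATA {\n"
update += "\n".join("  %s %s %s ." % t for t in triples)
update += "\n}"

# One HTTP call for the whole batch instead of one call per triple
resp = requests.post(
    "http://localhost:5820/VirtualTesting/update",   # assumed SPARQL update endpoint
    data=update.encode("utf-8"),
    headers={"Content-Type": "application/sparql-update"},
    auth=("admin", "admin"),                          # basic auth credentials
)
resp.raise_for_status()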
Or, as @stephen suggested, you can use a transaction, although I don't think that would save you all the network round trips if you were using the REST API. I think it would if you were using the Java API.
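For reference, if you already have the data in a file (Turtle, for example), the HTTP API also lets you add it inside one transaction: begin a transaction, POST the file contents to the add endpoint, then commit. A rough sketch, assuming the transaction endpoints described in the Stardog HTTP API docs (/{db}/transaction/begin, /{db}/{tx}/add, /{db}/transaction/commit/{tx}), the example file name data.ttl, and admin/admin credentials; verify the paths against your Stardog version:

import requests

base = "http://localhost:5820/VirtualTesting"
auth = ("admin", "admin")

# Begin a transaction; the response body is the transaction id
tx = requests.post(base + "/transaction/begin", auth=auth).text.strip()

# POST the whole file to the add endpoint in one call
with open("data.ttl", "rb") as f:
    requests.post(
        "%s/%s/add" % (base, tx),
        data=f,
        headers={"Content-Type": "text/turtle"},
        auth=auth,
    ).raise_for_status()

# Commit the transaction to make the data visible
requests.post(base + "/transaction/commit/" + tx, auth=auth).raise_for_status()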
If you have a large dataset, it's better to load it first, before using your Python code to query.
So you can use a command like this one: nohup ./stardog-admin db create -n <myDB> /repo/of/my/large/dataset
Before that, create a stardog.properties file with at least this setting: memory.mode=write_optimized
I used this method to create a DB and load almost 1 billion triples in less than 3 hours.
HTH
Thanks for your suggestion! One more question: stardog.properties doesn't exist yet. Is it correct to create stardog.properties under stardog-5.3.3/bin and add only the one line (below)? Do I need to add anything else to the file?
Have you considered writing the data to a file and using db create as Ghislain suggests? data add will work equally well for existing databases.
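If you go that route, writing the triples out from Python is straightforward with rdflib; a minimal sketch (the output path data.ttl is just an example, and the triple shown is the one from your earlier snippet):

import rdflib
from rdflib import Namespace

core = Namespace("http://ontologies.com/core#")

g = rdflib.Graph()
g.bind("core", core)
g.add((core.Account2, core.semanticMatch, core["_datasetElement_900005"]))
# ... add the remaining triples here

# Write everything to a Turtle file that can then be loaded with
# `stardog-admin db create` or `stardog data add`
g.serialize(destination="data.ttl", format="turtle")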
Your stardog.properties file should be in your Stardog home directory; the same place as your data files.
Unless you’re very sensitive to load performance, I wouldn’t worry too much. 3 million triples is not very large and should load fast enough under any memory configuration.