Stardog returns reasoning results even when the database is empty

Hi community.

This problem has been annoying me for a long time.

I use two connections in my Java program:
(1) dataLoadingConnection is used to load the data;
(2) reasoningQueryConnection is used to execute queries with reasoning.
Both are instances of com.complexible.stardog.api.Connection.

I load data by reading an N-Quads file line by line, using dataLoadingConnection to load each line into Stardog.
A counter tracks the triples loaded; once it reaches 100, I execute a query with reasoning using reasoningQueryConnection.
Then I need to remove all data from the triple-store, which I do with dataLoadingConnection.remove().all().

I am using an SSD, so disk I/O is fast.
One cycle of loading 100 triples, executing the query, and emptying the database takes only about 0.5s, so everything happens quickly (yes, this is a stream reasoning prototype).

The problem is that the query always returns the same result, which should be impossible given that the data is dumped and different data is loaded each cycle. I have set a breakpoint right after the data is dumped (so the database is empty at that point), but the same query result is still returned when I run the query manually in the web console or via stardog query --reasoning db "my query" on the command line. However, if I manually restart the Stardog server, the problem goes away.

My questions are:
(1) I cannot restart the server manually after every dump, since the cycle runs so fast and I only use the Stardog client API. What should I do about this?
(2) Is the cycle so fast that something in Stardog cannot keep up with it?
(3) Is the query result/data cached inside Stardog, so that the same result is returned every time? Otherwise it would be impossible to return that query result after the data is dumped…

Thanks a lot!

Robert

The first thing that comes to mind is that there's a problem with how you're handling transactions. Try taking a look at ConnectionAPIExample.java in the stardog-union/stardog-examples repository on GitHub.
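
The usual pattern is begin, write, commit, and roll back if something goes wrong. A minimal sketch (conn and aStream here are placeholders for your connection and input stream):

	conn.begin();
	try {
		conn.add().io().format(RDFFormat.NQUADS).stream(aStream); // any writes
		conn.commit();
	}
	catch (StardogException e) {
		conn.rollback(); // don't leave the transaction open on failure
		throw e;
	}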

If you're only ever going to have 100 triples, you might want to use an in-memory db.
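
Something like this sketch would do it (note that memory(...) is my recollection of the 4.x admin builder API, so double-check it against your version):

	AdminConnection adminConn = AdminConnectionConfiguration.toServer("http://localhost:5820")
			.credentials("admin", "admin")
			.connect();
	if (adminConn.list().contains("db")) {
		adminConn.drop("db"); // drop the existing disk db with the same name
	}
	adminConn.memory("db").create(); // create it as an in-memory database instead
	adminConn.close();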

Thank you Zach,

I have read the GitHub code snippet. It mentions that "all changes made to the database should be within one transaction". I use dataLoadingConnection to add and delete triples (i.e., changing database contents within a transaction), while the other connection, reasoningQueryConnection, is only used to run queries, which don't change the database. So I think my usage of transactions is correct. What's your opinion? Is there anything I'm not aware of?

The following is my code, if you would like to take a look at my implementation:

public class SnarlClient {

	private static String serverURL = "http://localhost:5820";
	private static String dbName = "db";
	private static String password = "admin";
	private static String username = "admin";

	private AdminConnection adminConn;
	private Connection dataLoadingConnection;
	private ReasoningConnection reasoningQueryConnection;

	public SnarlClient() {
		adminConn = AdminConnectionConfiguration.toServer(serverURL).credentials(username, password).connect();
		dataLoadingConnection = ConnectionConfiguration.to(dbName).server(serverURL).credentials(username, password).reasoning(true).connect();
		reasoningQueryConnection = ConnectionConfiguration.to(dbName).server(serverURL).credentials(username, password).reasoning(true).connect().as(ReasoningConnection.class);
		info("connected to " + serverURL + "/" + dbName);
		emptyDB();
		info("database initialized");
	}

	public void loadData(String path, RDFFormat f) {
		if(!path.isEmpty()) {
			try {
				this.dataLoadingConnection.begin();
				this.dataLoadingConnection.add().io().format(f).stream(new FileInputStream(path));
			} catch (StardogException e) {
				err("data load failed: " + path);
				e.printStackTrace();
			} catch (FileNotFoundException e) {
				err("data file not found: " + path);
				e.printStackTrace();
			}
			this.dataLoadingConnection.commit();
			info("data loaded from " + path);
		}
		else {
			info("data path is empty");
		}
	}

	// add model to triple-store
	public void addModel(Model m, String graph_id) {
		dataLoadingConnection.begin();
		dataLoadingConnection.add().graph(m, Values.iri(graph_id));
		dataLoadingConnection.commit();
	}

	// query
	public TupleQueryResult query(String queryString, boolean enableReasoning) {
		if(enableReasoning) {
			return this.reasoningQueryConnection.select(queryString).execute();
		}
		else {
			return this.dataLoadingConnection.select(queryString).execute();
		}
	}

	// explain
	public Iterator<Proof> explain(Statement s) {
		return this.reasoningQueryConnection.explain(s).computeNamedGraphs().proofs().iterator();
	}

	// empty triple-store
	public void emptyDB() {
		dataLoadingConnection.begin();
		dataLoadingConnection.remove().all();
		dataLoadingConnection.commit();
		info("database cleaned");
	}

	// helper functions
	private void info(Object x) { System.out.println("[Stardog INFO]" + x); }
	private void err(Object x) { System.out.println("[Stardog ERR]" + x); }
}

I don't see your exact problem jumping out at me, but I'll keep looking until someone else can point out what I'm missing. In the meantime, I'll add some helpful comments on a few things that I do see.

Both your dataLoadingConnection and reasoningQueryConnection have reasoning enabled by calling .reasoning(true). What promoting reasoningQueryConnection adds is access to reasoning methods like isConsistent(), but I believe both connections will apply reasoning to any queries they execute. Since you're only executing update queries over the dataLoadingConnection, and updates don't have reasoning applied, the way you're using it should be equivalent to reasoning not being enabled on it.

A quick question that comes to mind is what happens when you request .as(ReasoningConnection.class) on a connection without reasoning enabled? I'm guessing it throws an exception; I'm just not sure whether that would happen when you request the connection promotion or when you go to execute a method on the connection. In any case, you should be able to get away with a single connection and simplify things a bit if you'd like.
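
For example, a minimal sketch of that single-connection setup (the same builder calls you already use, just promoted once):

	Connection aConn = ConnectionConfiguration.to("db")
			.server("http://localhost:5820")
			.credentials("admin", "admin")
			.reasoning(true)
			.connect();
	ReasoningConnection aReasoningConn = aConn.as(ReasoningConnection.class);
	// use aConn.begin()/add()/commit() for writes and aReasoningConn.select(...) for reasoning queries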

Your query method returns a TupleQueryResult. You'll need to make sure to close those when you're done.
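
For example, with a try/finally so the result is released even if iteration throws:

	TupleQueryResult aResult = client.query(query, true);
	try {
		while (aResult.hasNext()) {
			System.out.println(aResult.next());
		}
	}
	finally {
		aResult.close(); // always close the result when you're done with it
	}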

Can you share the code that you use to run this?

Hi Zach,

I have updated my SnarlClient code, which is as follows:

public class SnarlClient {

	private static String serverURL = "http://localhost:5820";  
	private static String dbName = "db";
	private static String password = "admin";
	private static String username = "admin";
	

	private ReasoningConnection aReasoningConn;
	private Connection aConn;

	public SnarlClient() {
		aReasoningConn = ConnectionConfiguration.to(dbName).server(serverURL).credentials(username, password).reasoning(true).connect().as(ReasoningConnection.class);
		aConn = ConnectionConfiguration.to(dbName).server(serverURL).credentials(username, password).connect();
		info("connected to " + serverURL + "/" + dbName);
		emptyDB();
		info("database initialized");
	}
	
	public void loadData(String path, RDFFormat f) {
		if(!path.isEmpty()) {
			try {
				this.aConn.begin();
				this.aConn.add().io().format(f).stream(new FileInputStream(path));				
			} catch (StardogException e) {
				err("data load failed: " + path);
				e.printStackTrace();
			} catch (FileNotFoundException e) {
				err("data file not found: " + path);
				e.printStackTrace();
			}
			this.aConn.commit();
			info("data loaded from " + path);	
		}
		else {
			info("data path is empty");
		}
	}

	// add model to triple-store
	public void addModel(Model m, String graph_id) {
		aConn.begin();
		aConn.add().graph(m, Values.iri(graph_id));
		aConn.commit();
	}

	// add models to triple-store
	public void addModels(Queue<TemporalGraph> models) {
		aConn.begin();
		models.forEach((m)->{
			aConn.add().graph(m.getModel(), Values.iri(m.getGraphID()));
		});
		aConn.commit();
	}
	
	// query
	public TupleQueryResult query(String queryString, boolean enableReasoning) {
		if(enableReasoning) {
			return this.aReasoningConn.select(queryString).execute();
		}
		else {
			return this.aConn.select(queryString).execute();
		}
	}
	
	// explain
	public Iterator<Proof> explain(Statement s) {
		return this.aReasoningConn.explain(s).computeNamedGraphs().proofs().iterator();
	}

	// delete graphs in triple-store
	public void deleteGraph(String graphs) {
		// https://groups.google.com/a/clarkparsia.com/forum/#!searchin/stardog/sparql$20drop/stardog/5t8Q63w25w8/iLbQaPByFAAJ
		this.aConn.update("delete { graph ?g { ?s ?p ?o } } where { graph ?g {?s ?p ?o.} values ?g{ " + graphs + " }}").execute();
	}		
	
	// empty triple-store
	public void emptyDB() {
		aConn.begin();
		aConn.remove().all();
		aConn.commit();
		info("database cleaned");
	}
	
	// clear all graphs in the current database
	public void clearAllGraphs() {
		this.aConn.update("delete {graph ?g {?s ?p ?o}} where {graph ?g {?s ?p ?o}}").execute();
		info("all graphs cleared");
	}
	
	// clean up everything
	public void cleanUp() {
		aReasoningConn.close();
		aConn.close();
		info("all connections closed.");
	}
}
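
A note on deleteGraph: the graphs argument is spliced into the VALUES clause, so it expects a space-separated list of bracketed graph IRIs, e.g. (hypothetical IRIs):

	client.deleteGraph("<http://example.org/graph/1> <http://example.org/graph/2>");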

I use the following code to run it:

	public void test() {
		String query = "PREFIX ub: <http://swat.cse.lehigh.edu/onto/univ-bench.owl#> SELECT ?s WHERE { ?s rdf:type ub:Chair . }";
		SnarlClient client = new SnarlClient();
		client.emptyDB();
		client.loadData("./file/univ-bench.owl", RDFFormat.RDFXML);
		int counter = 0;
		try {
			BufferedReader br = new BufferedReader(new InputStreamReader(new FileInputStream(new File("./file/stream.data"))));
			String line = "";
			while((line = br.readLine()) != null) {
				CommandParser parser = new CommandParser(line);
				String s = parser.getOption("s");
				String p = parser.getOption("p");
				String o = parser.getOption("o");
				String g = parser.getOption("g");
				Model dataModel = null;
				if(o.contains("http")) { dataModel = Models2.newModel(Values.statement(Values.iri(s),Values.iri(p),Values.iri(o))); }
				else { dataModel = Models2.newModel(Values.statement(Values.iri(s),Values.iri(p),Values.literal(o))); }
				client.addModel(dataModel, g);
				counter++;
				if(counter == 100) {
					TupleQueryResult r = client.query(query,true);
					while(r.hasNext()) {
						System.out.println("query");
						System.out.println(r.next().toString());
					}
					r.close();
					client.clearAllGraphs();
				}				
			}
			br.close();
		} catch (FileNotFoundException e) {
			System.out.println("cannot read data");
			e.printStackTrace();
		} catch (IOException e) {
			e.printStackTrace();
		}
	}

My data can be downloaded from Dropbox - File Deleted
stream.data is the streaming data that the program reads; univ-bench.owl is a modified version of the original LUBM ontology.
This code essentially reads 100 triples at a time from the data, runs a query that requires OWL DL reasoning, dumps the database, and repeats.
The problem is that the result generated from the first 100 triples is reported over and over again, regardless of which triples are in the database, even when the db is empty...
I am using Stardog 4.2.4, with the database set to DL reasoning and querying all graphs.
I don't really know what's wrong with my code or with Stardog; I built a similar stream reasoning application on previous versions like 4.0-rc2 and never encountered this problem.
Would you please help me out? Thank you very much!

Robert

EDIT:
I also attached my CommandParser code:

import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.CommandLineParser;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

public class CommandParser {
	
	private CommandLine cli;
	
	public CommandParser() {}
	
	public CommandParser(String commands) {
		setCli(parseCLI(commands.split(" ")));
	}
	
	private CommandLine parseCLI(String[] commands) {
		Options options = new Options();
		
		// streaming data schema parser
		options.addOption("s", true, "subject");
		options.addOption("p", true, "predicate");
		options.addOption("o", true, "object");
		options.addOption("g", true, "graph id");

		CommandLineParser parser = new DefaultParser();
		try {
			return parser.parse(options, commands);
		} catch (ParseException e) {
			err("command line arguments parsing failed");
			e.printStackTrace();
		}
		return null;
	}
	
	public void parse(String commands) {setCli(parseCLI(commands.split(" ")));}
	
	public String getOption(String opt) {
		if(cli != null && cli.hasOption(opt))
			return cli.getOptionValue(opt);
		return "";
	}
	
	// setter & getter
	public CommandLine getCli() { return cli; }
	private void setCli(CommandLine cli) {  this.cli = cli; }
	
	// helper function
	private void err(Object x) { System.out.println("[CommandParser ERR] " + x); }
}

The main problem I noticed with this is that it never resets the counter variable back to 0. If I do that and run it, I seem to get different results:

query
[s=http://www.Department9.University0.edu/FullProfessor0]
all graphs cleared
all graphs cleared
all graphs cleared
all graphs cleared
query
[s=http://www.Department10.University0.edu/FullProfessor5]
query
[s=http://www.Department4.University0.edu/FullProfessor3]
query
[s=http://www.Department7.University0.edu/FullProfessor1]
all graphs cleared
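
In other words, a minimal sketch of that fix against your test() loop, with everything else unchanged:

	if(counter == 100) {
		TupleQueryResult r = client.query(query, true);
		while(r.hasNext()) {
			System.out.println(r.next().toString());
		}
		r.close();
		client.clearAllGraphs();
		counter = 0; // reset so each window of 100 triples triggers a fresh query
	}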

Thank you, Stephen.

I have corrected the counter problem and updated my data and program a little; the majority of the code remains the same as before. However, I still cannot get different results like you did. Allow me to show you my current code, results, and database configuration metadata.

SnarlClient.java

package com.sibench.mini;

import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.util.Iterator;

import org.openrdf.model.Model;
import org.openrdf.model.Statement;
import org.openrdf.query.TupleQueryResult;
import org.openrdf.rio.RDFFormat;

import com.complexible.common.rdf.model.Values;
import com.complexible.stardog.StardogException;
import com.complexible.stardog.api.Connection;
import com.complexible.stardog.api.ConnectionConfiguration;
import com.complexible.stardog.api.reasoning.ReasoningConnection;
import com.complexible.stardog.reasoning.Proof;

public class SnarlClient {

	private static String serverURL = "http://localhost:5820";  
	private static String dbName = "db";
	private static String password = "admin";
	private static String username = "admin";
	

	private ReasoningConnection aReasoningConn;
	private Connection aConn;

	public SnarlClient() {
		aReasoningConn = ConnectionConfiguration.to(dbName).server(serverURL).credentials(username, password).reasoning(true).connect().as(ReasoningConnection.class);
		aConn = ConnectionConfiguration.to(dbName).server(serverURL).credentials(username, password).connect();
		info("connected to " + serverURL + "/" + dbName);
		emptyDB();
		info("database initialized");
	}
	
	public void loadData(String path, RDFFormat f) {
		if(!path.isEmpty()) {
			try {
				this.aConn.begin();
				this.aConn.add().io().format(f).stream(new FileInputStream(path));				
			} catch (StardogException e) {
				err("data load failed: " + path);
				e.printStackTrace();
			} catch (FileNotFoundException e) {
				err("data file not found: " + path);
				e.printStackTrace();
			}
			this.aConn.commit();
			info("data loaded from " + path);	
		}
		else {
			info("data path is empty");
		}
	}
	
	// add a single model to triple-store
	public void addModel(Model m, String graph_id) {
		aConn.begin();
		aConn.add().graph(m, Values.iri(graph_id));
		aConn.commit();
	}
	
	// query
	public TupleQueryResult query(String queryString, boolean enableReasoning) {
		
		if(enableReasoning) {
			this.aReasoningConn.begin();
			TupleQueryResult result = this.aReasoningConn.select(queryString).execute();
			this.aReasoningConn.commit();
			return result;
		}
		else {
			return this.aConn.select(queryString).execute();
		}
	}
	
	// explain
	public Iterator<Proof> explain(Statement s) {
		return this.aReasoningConn.explain(s).computeNamedGraphs().proofs().iterator();
	}

	// delete graphs in triple-store
	public void deleteGraph(String graphs) {
		this.aConn.begin();
		// https://groups.google.com/a/clarkparsia.com/forum/#!searchin/stardog/sparql$20drop/stardog/5t8Q63w25w8/iLbQaPByFAAJ
		this.aConn.update("delete { graph ?g { ?s ?p ?o } } where { graph ?g {?s ?p ?o.} values ?g{ " + graphs + " }}").execute();
		this.aConn.commit();
	}		
	
	// empty triple-store
	public void emptyDB() {
		aConn.begin();
		aConn.remove().all();
		aConn.commit();
		info("database cleaned");
	}
	
	// clear all graphs in the current database
	public void clearAllGraphs() {
		this.aConn.begin();
		this.aConn.update("delete {graph ?g {?s ?p ?o}} where {graph ?g {?s ?p ?o}}").execute();
		this.aConn.commit();
		info("all graphs cleared");
	}
	
	// clean up everything
	public void cleanUp() {
		aReasoningConn.close();
		aConn.close();
		info("all connections closed.");
	}
	
	// helper function
	private void info(Object x) { System.out.println("[Stardog INFO]" + x);}
	private void err(Object x) { System.out.println("[Stardog ERR]" + x);}	
}

SIBenchMini.java

package com.sibench.mini;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.openrdf.model.Model;
import org.openrdf.query.TupleQueryResult;
import org.openrdf.rio.RDFFormat;

import com.complexible.common.openrdf.model.Models2;
import com.complexible.common.rdf.model.Values;

public class SIBenchMini {
	
	public static void main(String[] args) throws IOException {
		String query = "PREFIX ub: <http://swat.cse.lehigh.edu/onto/univ-bench.owl#> SELECT ?s WHERE { ?s rdf:type ub:Chair . }";
		SnarlClient client = new SnarlClient();
		client.loadData("./file/ontology/univ-bench.owl", RDFFormat.RDFXML);
		BufferedReader br = new BufferedReader(new InputStreamReader(new FileInputStream(new File("./file/data/data.rdfstream"))));
		String aDataLine = "";
		int counter = 1;
		while((aDataLine = br.readLine()) != null) {
			Pattern pattern = Pattern.compile("<([:.\\/\\-#A-Za-z\\d]+)>\\s<([:.\\/\\-#A-Za-z\\d]+)>\\s[<\"]?([@:.\\/\\-#A-Za-z\\d]+)[>\"]?\\s<([:.\\/\\-#A-Za-z\\d]+)>\\s(\\d+:[\\d.:]+)\\s?(\\d+:[\\d.:]+)?\\s?([\\d.]+)?");
			Matcher match = pattern.matcher(aDataLine);
			if(match.find()) {
				String s = match.group(1);
				String p = match.group(2);
				String o = match.group(3);
				String g = match.group(4);
				// create model
				Model dataModel = null;
				if(o.contains("http")) { dataModel = Models2.newModel(Values.statement(Values.iri(s),Values.iri(p),Values.iri(o))); }
				else { dataModel = Models2.newModel(Values.statement(Values.iri(s),Values.iri(p),Values.literal(o))); }
				client.addModel(dataModel, g);
			}
			if(counter % 303 == 0) { // load 303 triples each time
				System.out.println("iteration " + counter / 303);
				TupleQueryResult result = client.query(query, true);
				while(result.hasNext()) {
					System.out.println(result.next().toString());
				}
				result.close();
				client.clearAllGraphs();
			}
			counter++;
		}
		br.close();		
	}
}

My data and ontology reside here: Dropbox - File Deleted

My results are

[Stardog INFO]connected to http://localhost:5820/db
[Stardog INFO]database cleaned
[Stardog INFO]database initialized
[Stardog INFO]data loaded from ./file/ontology/univ-bench.owl
iteration 1: [s=http://www.Department0.University0.edu/FullProfessor7]
[Stardog INFO]all graphs cleared
iteration 2: [s=http://www.Department0.University0.edu/FullProfessor7]
[Stardog INFO]all graphs cleared
iteration 3: [s=http://www.Department0.University0.edu/FullProfessor7]
[Stardog INFO]all graphs cleared
iteration 4: [s=http://www.Department0.University0.edu/FullProfessor7]
[Stardog INFO]all graphs cleared
iteration 5: [s=http://www.Department0.University0.edu/FullProfessor7]
[Stardog INFO]all graphs cleared
iteration 6: [s=http://www.Department0.University0.edu/FullProfessor7]
[Stardog INFO]all graphs cleared
iteration 7: [s=http://www.Department0.University0.edu/FullProfessor7]
[Stardog INFO]all graphs cleared
iteration 8: [s=http://www.Department0.University0.edu/FullProfessor7]
[Stardog INFO]all graphs cleared
iteration 9: [s=http://www.Department0.University0.edu/FullProfessor7]
[Stardog INFO]all graphs cleared

In SIBenchMini.java, I load 303 triples each time. The result at iteration 1, http://www.Department0.University0.edu/FullProfessor7, is the correct answer. The program then dumps all the named graphs in the database. (In this program, each triple read from data.rdfstream is wrapped in a unique graph ID, so the number of triples equals the number of graphs. The ontology is not dumped, since it doesn't live in an explicit named graph.)

In the data.rdfstream file, the triples related to http://www.Department0.University0.edu/FullProfessor7 appear only within the first 303 triples. This means that when the next 303 triples are loaded and queried, http://www.Department0.University0.edu/FullProfessor7 shouldn't be returned, since all of its related triples have already been dumped. Yet it keeps repeating, and that is the confusing problem.

I have set a breakpoint right after client.clearAllGraphs(), at which point the database should contain only ontology data and the query should return nothing. But the web console still returns http://www.Department0.University0.edu/FullProfessor7. I also tried stardog query --reasoning db "PREFIX ub: <http://swat.cse.lehigh.edu/onto/univ-bench.owl#> SELECT ?s WHERE { ?s rdf:type ub:Chair . }" in my terminal, and it returns that same single result as well, even though the database is empty!

I am so confused that this problem keeps happening even though I have tested on different machines and operating systems, and I wonder why it works on your machine.
Did I do something wrong that made the connection "cache" the first result and always return it?
Sorry for such a long post, but I really wanted to provide as much information as possible so that you can better understand the situation.

Really appreciate your time and help!

Robert

I have also attached my database metadata configuration in case it helps diagnose the problem.

+-------------------------------------------+----------------------------------------------------------------------------------+
|                  Option                   |                                      Value                                       |
+-------------------------------------------+----------------------------------------------------------------------------------+
| database.archetypes                       |                                                                                  |
| database.connection.timeout               | 1h                                                                               |
| database.creator                          | admin                                                                            |
| database.name                             | db                                                                               |
| database.namespaces                       | rdf=http://www.w3.org/1999/02/22-rdf-syntax-ns#,                                 |
|                                           | rdfs=http://www.w3.org/2000/01/rdf-schema#,                                      |
|                                           | xsd=http://www.w3.org/2001/XMLSchema#, owl=http://www.w3.org/2002/07/owl#,       |
|                                           | stardog=tag:stardog:api:, =http://api.stardog.com/                               |
| database.online                           | true                                                                             |
| database.time.creation                    | 2017-04-14T13:17:46.182-04:00                                                    |
| database.time.modification                | 2017-04-18T14:47:37.259-04:00                                                    |
| docs.default.rdf.extractors               | tika                                                                             |
| docs.default.text.extractors              | tika                                                                             |
| docs.filesystem.uri                       | file:///                                                                         |
| docs.path                                 | docs                                                                             |
| icv.active.graphs                         | default                                                                          |
| icv.consistency.automatic                 | false                                                                            |
| icv.enabled                               | false                                                                            |
| icv.reasoning.enabled                     | false                                                                            |
| index.differential.enable.limit           | 1000000                                                                          |
| index.differential.merge.limit            | 10000                                                                            |
| index.differential.size                   | 0                                                                                |
| index.disk.page.count.total               | 123768                                                                           |
| index.disk.page.count.used                | 15                                                                               |
| index.disk.page.fill.ratio                | 0.4501708984375                                                                  |
| index.last.tx                             | 8aafb66d-b9c1-439e-8e8d-eea4c6c12433                                             |
| index.literals.canonical                  | true                                                                             |
| index.named.graphs                        | true                                                                             |
| index.persist                             | true                                                                             |
| index.persist.sync                        | true                                                                             |
| index.size                                | 585                                                                              |
| index.statistics.update.automatic         | true                                                                             |
| index.type                                | Disk                                                                             |
| preserve.bnode.ids                        | true                                                                             |
| progress.monitor.enabled                  | true                                                                             |
| query.all.graphs                          | true                                                                             |
| query.plan.reuse                          | ALWAYS                                                                           |
| query.timeout                             | 5m                                                                               |
| reasoning.approximate                     | false                                                                            |
| reasoning.classify.eager                  | true                                                                             |
| reasoning.consistency.automatic           | false                                                                            |
| reasoning.punning.enabled                 | false                                                                            |
| reasoning.sameas                          | OFF                                                                              |
| reasoning.schema.graphs                   | *                                                                                |
| reasoning.schema.timeout                  | 1m                                                                               |
| reasoning.type                            | DL                                                                               |
| reasoning.virtual.graph.enabled           | true                                                                             |
| search.default.limit                      | 100                                                                              |
| search.enabled                            | false                                                                            |
| search.index.datatypes                    | http://www.w3.org/2001/XMLSchema#string,                                         |
|                                           | http://www.w3.org/1999/02/22-rdf-syntax-ns#langString                            |
| search.reindex.mode                       | sync                                                                             |
| search.wildcard.search.enabled            | false                                                                            |
| security.named.graphs                     | false                                                                            |
| spatial.enabled                           | false                                                                            |
| spatial.index.version                     | 1                                                                                |
| spatial.precision                         | 11                                                                               |
| strict.parsing                            | true                                                                             |
| transaction.isolation                     | SNAPSHOT                                                                         |
| transaction.logging                       | false                                                                            |
| transaction.logging.ignore.startup.errors | true                                                                             |
| transaction.logging.rotation.remove       | true                                                                             |
| transaction.logging.rotation.size         | 524288000                                                                        |
| versioning.directory                      | versioning                                                                       |
| versioning.enabled                        | false                                                                            |
+-------------------------------------------+----------------------------------------------------------------------------------+

Hi Zach,

Even though you have withdrawn your post, you mentioned that I didn't do it within one transaction. Did you mean that instead of doing

	// clear all graphs in the current database
	public void clearAllGraphs() {
		this.aConn.update("delete {graph ?g {?s ?p ?o}} where {graph ?g {?s ?p ?o}}").execute();
		info("all graphs cleared");
	}

I should wrap it with aConn.begin() and aConn.commit(), like the following?

	// clear all graphs in the current database
	public void clearAllGraphs() {
		this.aConn.begin();
		this.aConn.update("delete {graph ?g {?s ?p ?o}} where {graph ?g {?s ?p ?o}}").execute();
		this.aConn.commit();
		info("all graphs cleared");
	}

I am a little bit confused about transactions in Stardog. In my case, when I execute a SPARQL update query to delete triples, do I also need to pay attention to transactions? If so, what code should I add to make sure my transaction handling is correct?

I withdrew the response because I believe that update queries will implicitly use a transaction if there isn't currently one open, so wrapping it in a transaction, as you have shown, should be equivalent. Unfortunately I don't have an environment I can use to diagnose the problem from where I am, so I'll have to wait until I get home to really test it. For now I'm just trying to see if I can spot an error by reading your code.

It can be frustrating but I’m sure we’ll figure it out. Aren’t computers fun? :slight_smile:

Thank you, Zach.
Really appreciate your encouragement. Have you had a chance to test it yet? :slight_smile:
BTW, is there a way to download legacy Stardog versions? I would like to try some older versions and see whether the problem persists.

Robert

Hi,

Are you using DL reasoning? If I switch mine over to DL, I get the same output that you do. It may actually be related to a bug we were unable to reproduce earlier. I'm trying to narrow your data down to a reproducible test case so we can see what's happening.

If you are using DL reasoning, does the problem persist when you switch over to SL?

Hi Stephen,

Glad that my problem can help reproduce the bug. :slight_smile:

Yes, I was using DL reasoning. I have switched to SL reasoning, and the expected correct results are produced! It worked like a charm. You rock!

However, my application also requires reasoning explanations, which don't work under SL. Let me show you my code with reasoning explanation.

SIBenchMini.java

package com.sibench.mini;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Iterator;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.openrdf.model.Model;
import org.openrdf.model.Statement;
import org.openrdf.query.BindingSet;
import org.openrdf.query.TupleQueryResult;
import org.openrdf.rio.RDFFormat;

import com.complexible.common.openrdf.model.Models2;
import com.complexible.common.rdf.model.Values;
import com.complexible.stardog.reasoning.Proof;
import com.complexible.stardog.reasoning.ProofType;
import com.complexible.common.rdf.model.StardogValueFactory.RDF;

public class SIBenchMini {
	
	public static void main(String[] args) throws IOException {
		String query = "PREFIX ub: <http://swat.cse.lehigh.edu/onto/univ-bench.owl#> SELECT ?s WHERE { ?s rdf:type ub:Chair . }";
		SnarlClient client = new SnarlClient();
		client.loadData("./file/ontology/univ-bench.owl", RDFFormat.RDFXML);
		BufferedReader br = new BufferedReader(new InputStreamReader(new FileInputStream(new File("./file/data/data.rdfstream"))));
		String aDataLine = "";
		int counter = 1;
		while((aDataLine = br.readLine()) != null) {
			Pattern pattern = Pattern.compile("<([:.\\/\\-#A-Za-z\\d]+)>\\s<([:.\\/\\-#A-Za-z\\d]+)>\\s[<\"]?([@:.\\/\\-#A-Za-z\\d]+)[>\"]?\\s<([:.\\/\\-#A-Za-z\\d]+)>\\s(\\d+:[\\d.:]+)\\s?(\\d+:[\\d.:]+)?\\s?([\\d.]+)?");
			Matcher match = pattern.matcher(aDataLine);
			if(match.find()) {
				String s = match.group(1);
				String p = match.group(2);
				String o = match.group(3);
				String g = match.group(4);
				// create model
				Model dataModel = null;
				if(o.contains("http")) { dataModel = Models2.newModel(Values.statement(Values.iri(s),Values.iri(p),Values.iri(o))); }
				else { dataModel = Models2.newModel(Values.statement(Values.iri(s),Values.iri(p),Values.literal(o))); }
				client.addModel(dataModel, g);
			}
			if(counter % 303 == 0) { // load 303 triples each time
				System.out.println("iteration " + counter / 303);
				TupleQueryResult result = client.query(query, true);
				while(result.hasNext()) {
					BindingSet bs = result.next();
					// newly added code --> explain the results using SL reasoning in database. 
					Iterator<Proof> proof_itr = client.explain(Values.statement(Values.iri(bs.getValue("s").stringValue()), RDF.TYPE, Values.iri("http://swat.cse.lehigh.edu/onto/univ-bench.owl#Chair")));
					while(proof_itr.hasNext()) {
						Proof aproof = proof_itr.next();
						aproof.getExpressions(ProofType.ASSERTED).forEach((e)->{
							Iterator<Statement> e_itr = e.iterator();
							e_itr.forEachRemaining((itr)->{
								System.out.println(itr.toString());
							});
						}); 
					}					
				}
				result.close();
				client.clearAllGraphs();
			}
			counter++;
		}
		br.close();		
	}
}

I received the following error with SL reasoning explanation:

[Stardog INFO]connected to http://localhost:5820/db
[Stardog INFO]database cleaned
[Stardog INFO]database initialized
[Stardog INFO]data loaded from ./file/ontology/univ-bench.owl
iteration 1
Exception in thread "main" com.complexible.stardog.StardogException: Error getting content in response to inference explanation
	at com.complexible.stardog.protocols.http.reasoning.client.HttpReasoningConnection.toProof(HttpReasoningConnection.java:301)
	at com.complexible.stardog.protocols.http.reasoning.client.HttpReasoningConnection.getExplainResult(HttpReasoningConnection.java:277)
	at com.complexible.stardog.protocols.http.reasoning.client.HttpReasoningConnection.executeExplain(HttpReasoningConnection.java:255)
	at com.complexible.stardog.protocols.http.reasoning.client.HttpReasoningConnection.access$100(HttpReasoningConnection.java:77)
	at com.complexible.stardog.protocols.http.reasoning.client.HttpReasoningConnection$1.proofs(HttpReasoningConnection.java:115)
	at com.complexible.stardog.protocols.http.reasoning.client.HttpReasoningConnection$1.proofs(HttpReasoningConnection.java:112)
	at com.complexible.stardog.reasoning.AbstractStardogExplainer.proofs(AbstractStardogExplainer.java:62)
	at com.sibench.mini.SnarlClient.explain(SnarlClient.java:81)
	at com.sibench.mini.SIBenchMini.main(SIBenchMini.java:53)
Caused by: java.lang.NullPointerException
	at com.complexible.stardog.protocols.http.reasoning.client.HttpReasoningConnection$ProofListDeserializer.toProofNode(HttpReasoningConnection.java:341)
	at com.complexible.stardog.protocols.http.reasoning.client.HttpReasoningConnection$ProofListDeserializer.toProofNodes(HttpReasoningConnection.java:318)
	at com.complexible.stardog.protocols.http.reasoning.client.HttpReasoningConnection$ProofListDeserializer.deserialize(HttpReasoningConnection.java:310)
	at com.complexible.stardog.protocols.http.reasoning.client.HttpReasoningConnection$ProofListDeserializer.deserialize(HttpReasoningConnection.java:305)
	at com.google.gson.TreeTypeAdapter.read(TreeTypeAdapter.java:58)
	at com.google.gson.Gson.fromJson(Gson.java:795)
	at com.complexible.stardog.protocols.http.reasoning.client.HttpReasoningConnection.toProof(HttpReasoningConnection.java:298)
	... 8 more

However, DL reasoning returns the explanation, as follows:

[Stardog INFO]connected to http://localhost:5820/db
[Stardog INFO]database cleaned
[Stardog INFO]database initialized
[Stardog INFO]data loaded from ./file/ontology/univ-bench.owl
iteration 1
(http://www.Department0.University0.edu, http://www.w3.org/1999/02/22-rdf-syntax-ns#type, http://swat.cse.lehigh.edu/onto/univ-bench.owl#Department) [null]
(http://www.Department0.University0.edu/FullProfessor7, http://swat.cse.lehigh.edu/onto/univ-bench.owl#headOf, http://www.Department0.University0.edu) [null]
(http://swat.cse.lehigh.edu/onto/univ-bench.owl#publicationAuthor, http://www.w3.org/2000/01/rdf-schema#range, http://swat.cse.lehigh.edu/onto/univ-bench.owl#Person) [null]
(http://www.Department0.University0.edu/FullProfessor7/Publication7, http://swat.cse.lehigh.edu/onto/univ-bench.owl#publicationAuthor, http://www.Department0.University0.edu/FullProfessor7) [null]
(_:bnode_773b6d6d_cc0e_4ae8_85e9_9f4d7d9359cb_398, http://www.w3.org/2002/07/owl#intersectionOf, _:bnode_773b6d6d_cc0e_4ae8_85e9_9f4d7d9359cb_399) [null]
(http://swat.cse.lehigh.edu/onto/univ-bench.owl#Chair, http://www.w3.org/2002/07/owl#equivalentClass, _:bnode_773b6d6d_cc0e_4ae8_85e9_9f4d7d9359cb_398) [null]
(_:bnode_773b6d6d_cc0e_4ae8_85e9_9f4d7d9359cb_400, http://www.w3.org/1999/02/22-rdf-syntax-ns#rest, http://www.w3.org/1999/02/22-rdf-syntax-ns#nil) [null]
(_:bnode_773b6d6d_cc0e_4ae8_85e9_9f4d7d9359cb_400, http://www.w3.org/1999/02/22-rdf-syntax-ns#first, http://swat.cse.lehigh.edu/onto/univ-bench.owl#Person) [null]
(_:bnode_773b6d6d_cc0e_4ae8_85e9_9f4d7d9359cb_399, http://www.w3.org/1999/02/22-rdf-syntax-ns#rest, _:bnode_773b6d6d_cc0e_4ae8_85e9_9f4d7d9359cb_400) [null]
(_:bnode_773b6d6d_cc0e_4ae8_85e9_9f4d7d9359cb_398, http://www.w3.org/1999/02/22-rdf-syntax-ns#type, http://www.w3.org/2002/07/owl#Class) [null]
(_:bnode_773b6d6d_cc0e_4ae8_85e9_9f4d7d9359cb_399, http://www.w3.org/1999/02/22-rdf-syntax-ns#first, http://swat.cse.lehigh.edu/onto/univ-bench.owl#DepartmentHead) [null]
(http://swat.cse.lehigh.edu/onto/univ-bench.owl#DepartmentHead, http://www.w3.org/2002/07/owl#equivalentClass, _:bnode_773b6d6d_cc0e_4ae8_85e9_9f4d7d9359cb_401) [null]
(_:bnode_773b6d6d_cc0e_4ae8_85e9_9f4d7d9359cb_401, http://www.w3.org/1999/02/22-rdf-syntax-ns#type, http://www.w3.org/2002/07/owl#Restriction) [null]
(_:bnode_773b6d6d_cc0e_4ae8_85e9_9f4d7d9359cb_401, http://www.w3.org/2002/07/owl#onProperty, http://swat.cse.lehigh.edu/onto/univ-bench.owl#headOf) [null]
(_:bnode_773b6d6d_cc0e_4ae8_85e9_9f4d7d9359cb_401, http://www.w3.org/2002/07/owl#someValuesFrom, http://swat.cse.lehigh.edu/onto/univ-bench.owl#Department) [null]

So my situation is:
using DL doesn't produce correct results, but reasoning explanation works;
using SL produces correct results, but reasoning explanation doesn't work.

Might this also be related to the bug you mentioned?

Thank you,
Robert

Hi again,

Glad to see you were able to get results! The reasoning explanation bug is NOT related to the DL bug; it is related to a couple of other existing bugs for which there is currently no workaround. I don't want to tell you to run two databases (one DL and one SL) so you can do both things during the same run, but at the same time I'm not sure there is a better way at this point. These bugs should be addressed during the upcoming 5.x release cycle, but I can't provide a solid timeline for that either.
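
If you do go that route, it would just be two connections to two databases (the names dbSL and dbDL below are hypothetical, and you would need to load the ontology and stream into both):

	// query the SL database for correct results
	ReasoningConnection queryConn = ConnectionConfiguration.to("dbSL")
			.server("http://localhost:5820").credentials("admin", "admin")
			.reasoning(true).connect().as(ReasoningConnection.class);
	// ask the DL database for proofs, where explain() works
	ReasoningConnection explainConn = ConnectionConfiguration.to("dbDL")
			.server("http://localhost:5820").credentials("admin", "admin")
			.reasoning(true).connect().as(ReasoningConnection.class);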

Thank you Stephen,

This is already good enough. I think I can use some tricks to work around the DL/SL issue.
I really appreciate your and Zach's help and time. I can now close this post.

Looking forward to Stardog 5.x.

Robert

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.