Now I realize what was happening with blank nodes in our cluster, and it makes me a little "scared" of blank nodes.
I would like to know if this is the correct behavior of a 3-node cluster with an HA load balancer, or if we did something wrong.
Each time I execute a query by pressing "Execute" in the web console, the response is, as expected, served by each cluster node in turn.
I loaded my rule, as described above, the rule being:
a rule:SPARQLRule ;
I have only one rule in the store. When I execute the query to get the instances of rule:SPARQLRule three consecutive times, I get a different result set each time, each with one answer:
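For reference, the lookup is essentially the following (a minimal sketch; the rule: prefix URI is a placeholder here, since the actual namespace depends on the triple store in use):

```sparql
# Placeholder prefix -- substitute the store's actual rule namespace.
PREFIX rule: <http://example.org/rules#>

# List every resource typed as a SPARQL rule.
SELECT ?r
WHERE {
  ?r a rule:SPARQLRule .
}
```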
first run:
_:bnode_37f16220_602f_4ce7_9174_35d9d292ca7c_80
second run:
_:bnode_37f16220_602f_4ce7_9174_35d9d292ca7c_81
third run:
_:bnode_37f16220_602f_4ce7_9174_35d9d292ca7c_79
Which means that, on each server, the same rule:SPARQLRule got a different bnode ID. (If I keep executing the query, I get one of those three IDs over and over again.)
The DB is configured with Preserve BNode identifiers = ON, but this seems to have no effect here.
And this is where the strange behavior from my first post comes in: the rule is in a specific named graph, and I wanted to remove the rule by doing a
DELETE WHERE { GRAPH <thegraph> { ?s ?p ?o } }
And so, what was happening is that the blank node of the rule was effectively deleted from the cluster node that answered the query, but the delete was not replicated to the other nodes, presumably because on those nodes the blank node is a different resource, so the triples don't match.
And then, when I execute the query for the instances of rule:SPARQLRule three consecutive times, I would expect no answers (an empty result set). Instead, I get the blank node twice (from the two cluster nodes that still hold the rule), and an empty result set once (when answered by the cluster node on which the delete was performed).
Isn't this becoming a "nightmare" when handling triples with bnodes on a cluster?
I am not used to working with bnodes, so maybe there is an easy workaround, or guidelines that must be followed, if we don't want the cluster to become a mess when updating datasets with bnodes?
Thank you
Fabian