Dictionary corrupted

Hi all,

My first post (take it easy, ok?!)…

I was in the midst of updating our triplestore here at NCI with our most recent data. I shut down the server and restarted it (all for good measure).

Then the following stack trace was thrown. The only thing I could do was to drop the DBs, recreate them, and load again - which worked. Just bringing this to your attention for any possible diagnosis.


[triplestore@ncidb-d174-v bin]$ ./stardog-admin server start
Loading Databases: 50% complete in 00:00:19
WARN 2018-08-01 12:23:03,138 [main] com.complexible.stardog.dht.dictionary.HashDictionary:<init>(279): Dictionary is corrupted, will try to repair automatically
java.io.IOException: Cannot read header: 11094 != 8719617
at com.complexible.stardog.dht.impl.PagedDiskHashTable.readFreeOverflowPages(PagedDiskHashTable.java:436) ~[stardog-5.2.1.jar:?]
at com.complexible.stardog.dht.impl.PagedDiskHashTable.readFooter(PagedDiskHashTable.java:361) ~[stardog-5.2.1.jar:?]
at com.complexible.stardog.dht.impl.PagedDiskHashTable.read(PagedDiskHashTable.java:299) ~[stardog-5.2.1.jar:?]
at com.complexible.stardog.dht.impl.PagedDiskHashTable.<init>(PagedDiskHashTable.java:205) ~[stardog-5.2.1.jar:?]
at com.complexible.stardog.dht.impl.PagedDiskHashTableBuilder.build(PagedDiskHashTableBuilder.java:161) ~[stardog-5.2.1.jar:?]
at com.complexible.stardog.dht.dictionary.HashDictionary.<init>(HashDictionary.java:269) [stardog-5.2.1.jar:?]
at com.complexible.stardog.index.disk.DefaultDiskIndexReader.toMappingDictionary(DefaultDiskIndexReader.java:146) [stardog-5.2.1.jar:?]
at com.complexible.stardog.index.disk.DefaultDiskIndexReader.read(DefaultDiskIndexReader.java:133) [stardog-5.2.1.jar:?]
at com.complexible.stardog.index.io.IndexIO.read(IndexIO.java:320) [stardog-5.2.1.jar:?]
at com.complexible.stardog.db.DatabaseFactoryImpl.read(DatabaseFactoryImpl.java:126) [stardog-5.2.1.jar:?]
at com.complexible.stardog.db.DatabaseFactoryImpl.read(DatabaseFactoryImpl.java:55) [stardog-5.2.1.jar:?]
at com.complexible.stardog.StardogKernel.openDatabase(StardogKernel.java:2583) [stardog-5.2.1.jar:?]
at com.complexible.stardog.StardogKernel.initDatabases(StardogKernel.java:2316) [stardog-5.2.1.jar:?]
at com.complexible.stardog.StardogKernel.start(StardogKernel.java:2251) [stardog-5.2.1.jar:?]
at com.complexible.stardog.StardogKernel.initialize(StardogKernel.java:789) [stardog-5.2.1.jar:?]
at com.complexible.stardog.Stardog.initKernel(Stardog.java:219) [stardog-5.2.1.jar:?]
at com.complexible.stardog.Stardog.<init>(Stardog.java:211) [stardog-5.2.1.jar:?]
at com.complexible.stardog.Stardog.<init>(Stardog.java:65) [stardog-5.2.1.jar:?]
at com.complexible.stardog.Stardog$StardogBuilder.create(Stardog.java:572) [stardog-5.2.1.jar:?]
at com.complexible.stardog.cli.impl.ServerStart.call(ServerStart.java:149) [stardog-cli-5.2.1.jar:?]
at com.complexible.stardog.cli.impl.ServerStart.call(ServerStart.java:44) [stardog-cli-5.2.1.jar:?]
at com.complexible.stardog.cli.CLIBase.execute(CLIBase.java:55) [stardog-cli-5.2.1.jar:?]
at com.complexible.stardog.cli.admin.CLI.main(CLI.java:189) [stardog-cli-5.2.1.jar:?]

WARN 2018-08-01 12:25:04,570 [main] com.complexible.stardog.db.DatabaseFactoryImpl:read(141): Database NCIEVS is invalid and the repair failed: null

Loading Databases: 100% complete in 00:02:21
Loading Databases: 100% complete in 00:02:21

Loading Databases finished in 00:02:21.761


Just kidding. Nice to see you here. NCI? National Cancer Institute? It would be interesting to hear what you’re doing with Stardog.

What version were you upgrading from? Did it shut down cleanly? You might want to check your disk space to make sure you hadn't accidentally filled the drive that it's on.
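A quick way to rule out a full disk is to check the filesystem that holds your Stardog home. A minimal sketch (the 90% threshold and the example path are assumptions, not from this thread):

```shell
#!/bin/sh
# Warn when the filesystem holding a given path is nearly full.
check_disk() {
    # df -P guarantees POSIX single-line-per-filesystem output;
    # column 5 is the "Use%" figure, e.g. "42%".
    used=$(df -P "$1" | awk 'NR==2 { gsub("%", ""); print $5 }')
    if [ "$used" -ge 90 ]; then
        echo "WARNING: $1 is ${used}% full"
        return 1
    fi
    echo "OK: $1 is ${used}% full"
}

# Usage: point it at the filesystem holding your STARDOG_HOME, e.g.
# check_disk /var/opt/stardog
```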

We're currently supporting a Clinical Trials Search API, and using it as the triplestore of choice for our new EVS REST API.

Stardog Version 5.2.1.

This error occurred after updating the graphs in each DB. 11 GB used on the mount with 88 GB free. The shutdown command hung and I had to kill the server. Logs are set to rotate, and I don't see them being a space hog.

We do the following for our data updates (ignore the params that don't look like links; I can only post up to five links right now):

./stardog data remove -g …ncicb.nci.nih.gov/xml/owl/EVS/Thesaurus.owl NCIEVS -u $USER -p $PASSWORD
./stardog data add NCIEVS -g …ncicb.nci.nih.gov/xml/owl/EVS/Thesaurus.owl .owl -u $USER -p $PASSWORD
./stardog data remove -g …ncicb.nci.nih.gov/xml/owl/EVS/Thesaurus.rdf NCIEVS -u $USER -p $PASSWORD
./stardog data add NCIEVS -g …ncicb.nci.nih.gov/xml/owl/EVS/Thesaurus.rdf .rdf -u $USER -p $PASSWORD
./stardog data remove -g …NCIt CTRP -u $USER -p $PASSWORD
./stardog data add CTRP -g …NCIt .owl -u $USER -p $PASSWORD
./stardog data remove -g …NCIt_Flattened CTRP -u $USER -p $PASSWORD
./stardog data add CTRP -g …NCIt_Flattened .rdf -u $USER -p $PASSWORD
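For what it's worth, remove/add pairs like the above can be wrapped in a script that aborts on the first failed command, so a failed add doesn't leave a graph half-updated. A sketch (the graph IRI and file in the usage comment are placeholders; the real ones are elided above):

```shell
#!/bin/sh
# Run each remove/add pair through one function; set -eu aborts on any failure.
set -eu

STARDOG="${STARDOG:-./stardog}"

replace_graph() {
    db="$1"; graph="$2"; file="$3"
    "$STARDOG" data remove -g "$graph" "$db" -u "$USER" -p "$PASSWORD"
    "$STARDOG" data add "$db" -g "$graph" "$file" -u "$USER" -p "$PASSWORD"
}

# Usage (placeholder graph IRI and file):
# replace_graph NCIEVS "http://example.org/Thesaurus.owl" Thesaurus.owl
```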

Since startup threw the error, I did the following:

./stardog-admin db drop NCIEVS
./stardog-admin db drop CTRP
./stardog-admin db create -u ****** -p **** -n NCIEVS
./stardog-admin db create -u ***** -p ***** -n CTRP

and then ran our update script above. Startup was then clean. Everything seems to be fine now.

All relevant params in the properties file:

query.all.graphs = true
logging.audit.enabled = true
logging.audit.type = text
logging.audit.rotation.type = size
logging.audit.rotation.limit = 1000000
leading.wildcard.search.enabled = true
search.enabled = true
search.wildcard.search.enabled = true


We have had trouble upgrading from 5.1.0 to 5.3.x; the databases always ended up corrupted. We've found that running optimize before upgrading helps prevent the corruption.
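If it helps anyone, that pre-upgrade optimize pass can be scripted. A sketch, assuming the database names are passed in explicitly (STARDOG_ADMIN is just a local variable here, not something Stardog reads):

```shell
#!/bin/sh
# Optimize every named database before taking the server down for an upgrade.
set -eu

STARDOG_ADMIN="${STARDOG_ADMIN:-./stardog-admin}"

optimize_all() {
    for db in "$@"; do
        echo "Optimizing $db ..."
        "$STARDOG_ADMIN" db optimize "$db"
    done
}

# Usage:
# optimize_all NCIEVS CTRP
```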


Also, if you restore from a disk (block-based) backup, you have to insert a statement into Stardog (anything will do); otherwise optimize will crash.
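In other words, something like the following after a restore. This is only a sketch of that workaround; the dummy triple, the temp file, and the credential handling are my placeholders:

```shell
#!/bin/sh
# After restoring from a block-based backup: write one throwaway triple so the
# database has seen a write, then run optimize.
set -eu

STARDOG="${STARDOG:-./stardog}"
STARDOG_ADMIN="${STARDOG_ADMIN:-./stardog-admin}"

touch_then_optimize() {
    db="$1"
    dir=$(mktemp -d)
    # Any statement will do; this dummy triple is a placeholder.
    echo '<urn:example:s> <urn:example:p> "touch" .' > "$dir/touch.nt"
    "$STARDOG" data add "$db" "$dir/touch.nt" -u "$USER" -p "$PASSWORD"
    "$STARDOG_ADMIN" db optimize "$db"
    rm -rf "$dir"
}

# Usage:
# touch_then_optimize NCIEVS
```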

This makes sense. We just upgraded on our Dev tier. Thanks!
