Upgrading from 5.1.0 to 5.3.1 - corrupt database

Hi,

Upgrading from 5.1.0 to 5.3.1 gives the following error in our log and the data is moved to .unusable:

```
INFO  2018-06-08 13:12:42,900 [main] com.complexible.stardog.StardogKernel:start(2317): Initializing Stardog
WARN  2018-06-08 13:12:46,739 [main] com.complexible.stardog.index.disk.btree.impl.AbstractKeys:<init>(49): Invalid index metadata: 23 > 14
WARN  2018-06-08 13:12:46,741 [main] com.complexible.stardog.index.disk.btree.impl.AbstractKeys:<init>(49): Invalid index metadata: 18 > 4
WARN  2018-06-08 13:12:48,273 [main] com.complexible.stardog.index.disk.btree.impl.AbstractKeys:<init>(49): Invalid index metadata: 23 > 14
WARN  2018-06-08 13:12:48,274 [main] com.complexible.stardog.index.disk.btree.impl.AbstractKeys:<init>(49): Invalid index metadata: 18 > 4
WARN  2018-06-08 13:12:48,274 [main] com.complexible.stardog.db.DatabaseFactoryImpl:read(141): Database testSesame is invalid and the repair failed: Inconsistent index size for SPOC; expected=829496137, got= 829043393
INFO  2018-06-08 13:12:48,275 [main] com.complexible.stardog.StardogKernel:initDatabases(2394): Database testSesame will not be available because there was an error initializing the database: Inconsistent index size for SPOC; expected=829496137, got= 829043393
INFO  2018-06-08 13:12:58,115 [main] com.complexible.stardog.StardogKernel:handleUnusableIndex(2449): Moving irreparable database testSesame to /home/stardog/data/.unusable/testSesame
```

I ran optimize and repair while still on 5.1.0, then upgraded to 5.3.1, and that seemed to help.
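
For reference, this is roughly the sequence I ran; a sketch from memory, assuming the standard Stardog 5.x admin commands and using `testSesame` as the database name from the log above:

```
# Still on 5.1.0: compact the indexes, then attempt the repair
stardog-admin db optimize testSesame
stardog-admin db repair testSesame

# Stop the 5.1.0 server, swap the installation to 5.3.1,
# then start it again against the same STARDOG_HOME
stardog-admin server stop
stardog-admin server start
```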

Why does repairing with 5.1.0 work, but fail with 5.3.1 on the same database?

@stephen do you have any ideas why this is happening?

Apologies for the long delay on this!

This particular DB goes back to at least the 4.x days, does it not? This could be an edge case related to something left over from the 4.x -> 5.x migration. Am I understanding you correctly that if you optimize/repair in 5.1.0 and then upgrade everything is fine?

We went into production in February this year. From my records we might have been running 5.0.5.1 at the time, but nothing older.

Optimize+repair did make the upgrade from 5.1.0 to 5.3.1 work; however, it would require quite a lot of downtime for our production system.

I seem to be unable to recreate this issue locally, though simulating a real production database’s usage would take some time. Are you able to back up the database in 5.1.0 and then restore it on a fresh 5.3.1, perhaps?
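
Roughly what I mean, as a sketch; the restore path is just a placeholder for wherever the copied backup ends up, and `testSesame` is the database from your log:

```
# On the 5.1.0 server: take a backup of the affected database
stardog-admin db backup testSesame

# Copy the resulting backup directory to the 5.3.1 test machine,
# then restore it there into a fresh home
stardog-admin db restore /path/to/copied/backup
```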

Haven’t tried that. However, we have tried backup and restore from production 5.1.0 to our test server running 5.1.0, and that did not work. Our production guys talked to someone at Stardog about it and were advised to upgrade to 5.3.0, which is where we got stuck.

Håvard

Håvard,

If you are able to recreate the database on a 5.3.1 test server, what about doing so and then replacing the home directory on the production server? This would only require downtime in the form of restarting the production server.
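
Something along these lines, purely as a sketch; it assumes STARDOG_HOME is /home/stardog/data as in your log, and the path to the rebuilt home is a placeholder:

```
# On production, once the home directory rebuilt under 5.3.1 is ready:
stardog-admin server stop

# Keep the old home around and swap in the rebuilt one
mv /home/stardog/data /home/stardog/data.old
cp -r /path/to/rebuilt-home /home/stardog/data

# Start the (now 5.3.1) server against the replaced home
stardog-admin server start
```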

We would lose all the transactions made on the production server during that interval, unless there is a quick way of disabling all writes to the database.
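
The only quick option I can think of is taking the database offline for the duration of the swap; a sketch, assuming `db offline`/`db online` would be acceptable even though they block reads as well as writes:

```
# Block all access to the database before the final copy/swap
stardog-admin db offline testSesame

# ... take the final backup / swap the home directory ...

# Make the database available again afterwards
stardog-admin db online testSesame
```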