NaN confidence values in machine learning models - advice for tracking down problems?

Yesterday I built a similarity model using 7 numeric features and it worked fine.

I extended the model (setting spa:overwrite to true) by adding 13 more numeric features and 4 text features. I now always get NaN for confidence values when using the model.

I tried removing the 4 text features, and I still get all NaN confidence values.

What version of Stardog are you running? What platform are you running it on? (Docker, OSX, Windows, etc.)

Can you check the stardog.log file for any errors or warnings? In the meantime, you can try explicitly deleting your model rather than overwriting it and see if that clears things up, although it could have something to do with the features you added. If you have the time, you might want to experiment with a subset of your features to see if you can narrow it down to a particular feature. You could also build a new model with 6 features and then use overwrite to create a new model with the full 7; that might help determine whether the problem is with overwrite rather than with the features.

  graph spa:model {
      [] spa:deleteModel :myModel .
  }
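For context, a complete update along these lines might look like the sketch below. This assumes the spa: prefix binds to tag:stardog:api:analytics: and that :myModel resolves in your database's default namespace; adjust both to match your setup.

    # Sketch of an explicit model deletion via SPARQL Update.
    # Assumptions: spa: is tag:stardog:api:analytics:; :myModel is the
    # model IRI you created earlier.
    prefix spa: <tag:stardog:api:analytics:>

    INSERT DATA {
      graph spa:model {
        [] spa:deleteModel :myModel .
      }
    }

After running this, rebuilding the model from scratch (without spa:overwrite) should tell you whether the stale overwritten model was the source of the NaN confidence values.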

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.