Error message for maximum file length exceeded

Hello.

I am using the free Stardog Cloud. When I try to run a query from Studio, I see the following error message:
"Failed to run query: com.complexible.stardog.plan.eval.operator.OperatorException: Uncaught error during query evaluation: IOException: Maximum file length exceeded: 0B"

When I try to run the query from the Python API, I similarly see this error message:
"StardogException: [500] 000012: com.complexible.stardog.plan.eval.operator.OperatorException: Uncaught error during query evaluation: IOException: Maximum file length exceeded: 0B"

Does this mean the results being returned are exceeding a quota for the free account?

Thanks.
Scott

Adding some error messages at the request of Stardog support:

I have put a LIMIT clause on the SQL in the SMS file to return fewer results, and running SPARQL in Studio now seems to be fine. However, I am seeing inconsistent results from SPARQL when using the Python API, which is what I am most interested in using. The inconsistency is in the results being returned: sometimes it returns fewer results than I know exist, and sometimes I get an error message that says:

StardogException: [500] 000012: com.complexible.stardog.plan.eval.operator.OperatorException: Unable to execute virtual graph query. SQL string: SELECT "reported_events"."id" AS "rid", "interventions"."id" AS "iid"
FROM "interventions"
INNER JOIN "reported_events" ON "interventions"."n_id" = "reported_events"."n_id"
FETCH NEXT 500000 ROWS ONLY
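
For context, the relevant mapping in my SMS file now looks roughly like this; the mapping IRI, prefix, and output triples are illustrative, but the SQL is the piece where I added the LIMIT:

PREFIX ex: <http://example.com/>

MAPPING <urn:interventions-events>
FROM SQL {
  SELECT "reported_events"."id" AS "rid", "interventions"."id" AS "iid"
  FROM "interventions"
  INNER JOIN "reported_events" ON "interventions"."n_id" = "reported_events"."n_id"
  LIMIT 500000
}
TO {
  ?intervention ex:hasReportedEvent ?event .
}
WHERE {
  BIND(template("http://example.com/intervention/{iid}") AS ?intervention)
  BIND(template("http://example.com/event/{rid}") AS ?event)
}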

FYI... I am using a different user ID for accessing Stardog from the API than the admin ID I use in Studio. I am doing this because when I tried to use the admin ID, I got a 401 error message (an unauthorized access error). I did give the new ID full access permissions, so this shouldn't be an issue.

What does the 500 error message mean?

Hi Scott,

The 500 means some exception happened while your query was being executed. In this case (I looked at your logs) it looks like Stardog's JDBC connection pool had a closed connection in it and this error occurred when we attempted to reuse it.

Can you confirm that this was a one-time error? Or is it the case that the queries that you ran from Studio always fail and the ones from Python always succeed (or any other repeatable pattern like that)? (Studio adds a LIMIT 1000 to all its queries so that could be an explanation.)

If it's the closed connection case, you can add these properties to your data source configuration to test the connection before each use:

testOnBorrow=true
validationQuery=SELECT 1
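
For example, in a data source properties file those settings would sit alongside the JDBC options, something like this (the URL, driver, and credentials are placeholders for your own values):

jdbc.url=jdbc:postgresql://db.example.com:5432/clinical
jdbc.username=stardog
jdbc.password=********
jdbc.driver=org.postgresql.Driver
testOnBorrow=true
validationQuery=SELECT 1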

-Paul

Hi Paul,

I added the properties you suggested to the data source configuration; let's see what happens. The errors I have seen are from both Studio and the Python API. Since a LIMIT 1000 is added to all Studio queries, does that mean that 1,000 results is the maximum for Studio and the API? If so, does this only apply to the free tier? I have some queries that need to return millions of results; are you saying that this is not allowed?
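
If paging is the expected way to handle result sets that large, this is roughly what I would do from Python; the page size, query, and connection details are all illustrative:

import stardog

# Placeholder connection details, as in my earlier snippet.
conn = stardog.Connection('mydb', endpoint='https://cloud.stardog.com:5820',
                          username='scott', password='********')

# Page through a large result set with LIMIT/OFFSET instead of one huge query.
# Note: without an ORDER BY, page boundaries are not guaranteed to be stable.
page_size = 10000
offset = 0
rows = []
while True:
    page = conn.select(
        f'SELECT ?s ?p ?o {{ ?s ?p ?o }} LIMIT {page_size} OFFSET {offset}'
    )['results']['bindings']
    rows.extend(page)
    if len(page) < page_size:
        break
    offset += page_size
print(f'{len(rows)} rows fetched')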

Thanks.
Scott

The config settings you suggested seem to have stopped the connection failures to the data source.

So my question about the data result limit still remains. I have a further example:

Running the following SPARQL query in both Studio and the API:

select ?study ?condition_name
{
  ?study ctgo:studiesCondition ?condition .
  ?condition rdfs:label ?condition_name .
}

produces the following error message:
StardogException: [500] 000012: com.complexible.stardog.plan.eval.operator.OperatorException: Uncaught error during query evaluation: OutOfMemoryError: GC overhead limit exceeded

Is this based on limits for the free tier? I ask because when I post the equivalent SQL query to the database API, it returns over 795K results in just under 2 seconds.

I have downloaded and installed Stardog on my machine and am waiting for a license key to be sent to me. I would be more than happy to run Stardog on my machine instead of on your cloud. That could solve two problems: 1) I wouldn't be hitting your cloud service with requests, and 2) I could continue my research without being stifled.
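
If it helps with diagnosis, a count-only version of the query, sketched below, should confirm the scale on the Stardog side without materializing all the bindings:

select (count(*) as ?n)
{
  ?study ctgo:studiesCondition ?condition .
  ?condition rdfs:label ?condition_name .
}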

Thanks.