KernelHttpService and kernel methods

Ok, so I'm digging in deeper into Stardog :wink:

I'm writing a custom HttpService, specifically a KernelHttpService. It's in the docs, so you only have yourselves to blame for my questions.

What's the difference between the kernel's `authenticate` and `login` methods? `authenticate` returns `Options`, so I'm a little confused about what it does. Do I need to handle logging into the kernel, or is that handled by the call to ShiroUtils? In my case, `ShiroUtils.requireSuperuser()`.

I also noticed that the internal query API is somewhat different from the public Stark API. Is that just a legacy from pre-Stark days, or is there possibly a performance difference to using a different API?

Authentication will be performed before your service is called. However, you need to check that the authenticated user (which, IIRC, may be null) is authorized to use your service. So `ShiroUtils.requireSuperuser()` is sufficient.

Which internal API are you talking about? Stark is used extensively internally.

I'm trying to make a connection and run a query so I'm doing

```java
mKernel.get().getConnection("mydb", Options.empty()).createQuery("myquery....")
```

That returns a `com.complexible.stardog.query.QueryFactory` instead of a `com.stardog.stark.query.QueryFactory`, and I can't quite figure out what the `execute` method on the QueryFactory is returning. It looks like it's returning `Object`. Can I just cast that to a `SelectQueryResult`?

Ok, I see. The `DatabaseConnection` returned by `getConnection()` is an internal interface. The `createQuery()` method should return `Query<T>`, and you can use `SelectQueryResult` as the type parameter, e.g. `Query<SelectQueryResult> aIndexQuery = mQueryFactory.createQuery(aQuery, Namespaces.STARDOG);`. At this point, you can deal with the query in the same way as in the client API.

What kind of service are you building?

I'm pulling the thread on the idea of building packages for a single function. I'm not quite sure why I'm so focused on functions, but they're just so damn handy. I can easily see there being hundreds of them, and I have some other ideas that would require supporting a large number of functions.

If you're going to have that many functions, there needs to be a good way to install and manage them. In addition, it would be nice to have some way to get documentation for a function as well. This wouldn't be exclusive to functions; it should support arbitrary plugins. I'm just starting with functions.

The idea is to write a service and a CLI plugin (I know it's not officially supported, but I'm fairly sure it can be done). The user might say, "hey, I could really use a special function here," and do a search for it on a central registry, something like http://plugins.stardog.com. They search for what they need, and it happens to be http://semantalytics.com/stardog/function/inet-ntoa, which implements the same function as MySQL's INET_NTOA.
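For reference, here's a minimal sketch of the conversion such a function would perform: turning a 32-bit unsigned integer into its dotted-quad string, the way MySQL's INET_NTOA does. The class and method names are just illustrative, not the actual plugin code.

```java
public class InetNtoa {
    // Convert a 32-bit unsigned integer (held in a long) into its
    // dotted-quad IPv4 string, e.g. 3232235777 -> "192.168.1.1".
    public static String inetNtoa(long addr) {
        return String.format("%d.%d.%d.%d",
                (addr >> 24) & 0xFF,
                (addr >> 16) & 0xFF,
                (addr >> 8) & 0xFF,
                addr & 0xFF);
    }

    public static void main(String[] args) {
        System.out.println(inetNtoa(3232235777L)); // 192.168.1.1
    }
}
```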

To install, the user would run `stardog-admin ext install http://semantalytics.com/stardog/function/inet-ntoa`

This would use the `/ext/install?uri=http%3A%2F%2Fsemantalytics.com%2Fstardog%2Ffunction%2Finet-ntoa` service.
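The CLI plugin would need to percent-encode the function IRI before passing it as a query parameter, which the JDK handles directly. A sketch (the `/ext/install` endpoint is the one proposed above, not an existing Stardog service):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class ExtInstallUrl {
    // Build the proposed /ext/install request path from a function IRI,
    // percent-encoding the IRI so it survives as a query parameter.
    public static String installUrl(String iri) {
        return "/ext/install?uri=" + URLEncoder.encode(iri, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(installUrl("http://semantalytics.com/stardog/function/inet-ntoa"));
        // /ext/install?uri=http%3A%2F%2Fsemantalytics.com%2Fstardog%2Ffunction%2Finet-ntoa
    }
}
```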

I'm using ext to stay out of the way of any potential future Stardog services and to make it clear that it's an extension and not a Stardog builtin.

The service would check whether the $STARDOG_HOME/ext directory exists and, if not, create it. Then it would create a database called ext and dereference the function IRI. The RDF result would be put into a named graph in ext. The database would then be queried for a jar file that implements the function, which would be downloaded to the ext directory, and maybe some information recorded. The query would be something like `select ?jar where { ?function :stardogImplementation ?jar }`. In addition to the jar file location, the graph would include the sha1 of the jar and documentation on the function.
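Verifying the downloaded jar against the recorded sha1 is straightforward with the JDK. A sketch (the class name is hypothetical; the `:stardogImplementation` property above is my own invention, not an existing vocabulary):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class JarChecksum {
    // Hex-encoded SHA-1 of a byte array, for comparing a downloaded jar
    // against the checksum recorded in the ext database's named graph.
    public static String sha1Hex(byte[] data) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-1").digest(data);
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-1 not available", e);
        }
    }

    public static void main(String[] args) {
        // Well-known SHA-1 test vector for "abc".
        System.out.println(sha1Hex("abc".getBytes()));
        // a9993e364706816aba3e25717850c26c9cd0d89d
    }
}
```

In practice the service would read the downloaded jar with `Files.readAllBytes` and refuse to install if the digest doesn't match the recorded value.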

There would possibly be a parameter to restart the server after install. I know I can't restart it from within, but I can shut it down, and if it was being managed by systemd it might be automatically restarted.
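For the systemd case, a unit along these lines would bring the server back up after the shutdown. This is just a sketch; the paths, unit name, and STARDOG_HOME location are assumptions:

```ini
# /etc/systemd/system/stardog.service (hypothetical unit)
[Unit]
Description=Stardog Server
After=network.target

[Service]
Type=forking
Environment=STARDOG_HOME=/var/lib/stardog
ExecStart=/opt/stardog/bin/stardog-admin server start
ExecStop=/opt/stardog/bin/stardog-admin server stop
# Restart whenever the process exits, including after an
# intentional shutdown triggered by the install service.
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Note that `Restart=always` restarts even on a clean exit, which is exactly what's wanted here: the install service stops the server and systemd brings it back with the new jar on the classpath.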

It should also handle upgrades so you don't accidentally put two jars implementing the same function on the class path.

It would need a similar uninstall service.

I thought about using BITES, where the service would call the BITES service to manage the jar and set STARDOG_EXT to $STARDOG/ext/docs/, but it doesn't quite work, since the jars end up one directory below that and you'd have to somehow scan that directory to build up STARDOG_EXT, and even that wasn't quite right. It just got hackish, so I went back to having a separate install service and letting BITES be BITES.

Hopefully that isn't too crazy an idea. Let me know what you think.

Sounds reasonable and yes BITES is not the right thing for this. I was following you up to creating a new database and storing something in a named graph. What actual data is present/necessary here? It sounds like it could just as easily be implemented outside of Stardog. I'm thinking of RPM repository tools like yum which can search the package list and install packages.

You could use rpm/yum, but that only works for Red Hat-based systems, and now you're having to do yum, apt, and something for macOS. You could also say "why not just use Maven," but then you have to reference your function by IRI when using it and by some other identifier, like Maven coordinates, when installing it. It seems silly to have to remember two different identifiers. It would also be nice to have an easy way to access documentation while writing queries. "What are the args for this function?" Fire up a new tab in Studio and query `select ?doc where { <myfunction> :docs ?doc }`.

The data contained in the named graph might include the sha1 so you can verify the download, a link to the previous version so you can remove an old one, documentation, function arguments, etc., and possibly the IRI of a remote SPARQL endpoint that implements the function, if you don't want to install it locally and would rather call it remotely.

I wasn't suggesting using RPM or Maven, just pointing out the analogy. For instance, what if your system was implemented outside of Stardog? You could even create a CLI implementation for stardog-admin without having to do anything in the server. It sounds like you just need to fetch the package list, download and install the jars, etc.

That's basically the idea, but the user has no way of knowing what plugins/functions are installed other than asking the admin if they installed them, or just trying them and seeing if they fail. If there are a large number of them, that's totally unreasonable.
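A hypothetical `ext list` command could answer that question by just enumerating what's in the ext directory. A sketch, assuming the $STARDOG_HOME/ext layout proposed above:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ExtList {
    // Enumerate installed extension jars under $STARDOG_HOME/ext,
    // so a user can see what's available without asking the admin.
    public static List<String> installedJars(Path extDir) {
        if (!Files.isDirectory(extDir)) {
            return Collections.emptyList();
        }
        try (Stream<Path> files = Files.list(extDir)) {
            return files.filter(p -> p.toString().endsWith(".jar"))
                        .map(p -> p.getFileName().toString())
                        .sorted()
                        .collect(Collectors.toList());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        Path ext = Files.createTempDirectory("ext");
        Files.createFile(ext.resolve("inet-ntoa-1.0.jar"));
        System.out.println(installedJars(ext)); // [inet-ntoa-1.0.jar]
    }
}
```

Of course, a richer answer (docs, arguments, versions) is exactly what the metadata in the ext database would provide; this only covers the "what jars are on disk" part.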

I could just fetch a package list, which is basically what I'm proposing, but using RDF for the package list. If it was done outside of Stardog, I'd still need a database, and it seems unnecessary to install a second instance of Stardog just to track installed packages and a small amount of metadata about them. Not using a database to track it sounds like the mistake Maven made, and it paid the price, stacking hack index on top of hack index, until even Sonatype finally decided to use, gasp, a graph database.

You can do that without writing and maintaining a server extension.

What would you suggest that would be a better solution? Also, what would you consider a good use for a server extension to be?
