Transport UDFs: good idea or insanity?

I was interested in people's thoughts on a project called Transport (GitHub: linkedin/transport), a framework for writing performant user-defined functions (UDFs) that are portable across a variety of engines, including Apache Spark, Apache Hive, and Presto.

I can't decide if it's brilliant or insane. It would require writing a custom transformer for Stardog, but the payoff would be access to any function already written for the framework.

It seems a bit silly to have to reimplement basic functions platform after platform after platform. I liked the idea, so I checked it out, but the implementation just looks like insanity. A more sane, semantic-web-oriented solution would be the Function Ontology (FnO): you'd still have to write platform-specific code-generation modules, but it seems like a more solid foundation. I think that's essentially what Transport is doing, just with a very convoluted custom language.
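To make the code-generation idea concrete, here is a minimal sketch of what "one declarative description, many platform stubs" could look like. Everything here is invented for illustration: the description dict only mimics the spirit of FnO (it uses none of the real FnO vocabulary), and the templates are toy strings rather than real Hive or SPARQL extension code.

```python
# Hypothetical sketch: a Function Ontology-style description is just data
# (name, parameter names/types, return type), and per-platform code
# generation is a walk over that data. All names are invented.

# A minimal, FnO-inspired function description (NOT real FnO terms).
DESCRIPTION = {
    "name": "toUpperCase",
    "params": [("input", "string")],
    "returns": "string",
}

# Per-platform templates keyed by target engine; a real generator would
# emit full Java for Hive/Presto, but strings are enough to show the shape.
TEMPLATES = {
    "hive": "public String {name}(String {arg}) {{ return {arg}.toUpperCase(); }}",
    "sparql": "# SPARQL extension function <urn:example:{name}>",
}

def generate(description, platform):
    """Render a platform-specific stub from one shared description."""
    arg = description["params"][0][0]
    return TEMPLATES[platform].format(name=description["name"], arg=arg)

print(generate(DESCRIPTION, "hive"))
print(generate(DESCRIPTION, "sparql"))
```

The point is that the function's semantics live in one declarative artifact, and each engine gets a thin generator, which is arguably what Transport does with its custom DSL.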

I would also like to experiment with deploying SPARQL functions to OpenFaaS. It might be a nice way to deploy functions quickly, especially CPU-intensive functions that would benefit from automatic scaling. I can't decide whether I should write OpenFaaS-targeted functions directly, or an OpenFaaS wrapper that exposes the function as a SPARQL endpoint instead of a plain REST web service.
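For the "OpenFaaS-targeted function" direction, a sketch of what the handler side might look like, assuming the OpenFaaS python3 template's `handle(req)` entry point. The edit-distance function is just a stand-in for whatever CPU-heavy SPARQL function you'd actually offload; the JSON request shape (`{"args": [...]}`) is my own invention, not anything OpenFaaS or SPARQL mandates.

```python
import json

def levenshtein(a, b):
    """Classic dynamic-programming edit distance -- a stand-in for any
    CPU-heavy function worth pushing to an autoscaling platform."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def handle(req):
    """OpenFaaS python3-template-style entry point: request body in,
    response body out. Expects a payload like {"args": ["a", "b"]}
    (an invented convention for this sketch)."""
    args = json.loads(req)["args"]
    return json.dumps({"result": levenshtein(*args)})
```

A SPARQL-endpoint wrapper would instead parse a SPARQL query out of the request and answer with SPARQL results JSON, which is more work up front but would let any SPARQL client call the function unchanged.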

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.