Service Discovery in SingularityNET — Some Rough Notes

Benjamin Goertzel
Ben Goertzel on SingularityNET
6 min read · Nov 12, 2017

This (fairly technical) post is a high-level description of how I currently envision aspects of the “service discovery” process working in the commercially launched, scalable version of SingularityNET. Not all of this will be supported in the early prototype versions.

Reader beware: This is a set of rough notes and not a formal description of functionality. I may edit this post soon after posting, based on feedback from other SingularityNET developers. But a lot of people have been asking about these aspects of SingularityNET lately, so I wanted to share my current thinking more elaborately than has been done so far. The stuff written here should be taken at the level of “stuff scribbled at the whiteboard during an informal tech meeting.” For example, the syntax and particulars used here are not fully consistent with the current SingularityNET prototype — but they are conceptually consistent with it…

Suppose a customer has a query like “I have a stream of images coming into my website each day, and I want to identify the faces in each image.”

First step is for the customer to choose an “NLP2API” Agent, which will translate this query into a formal request according to one or more APIs in the network.

The simplest way to carry out this first step is for the smart contract embodying the customer’s request to contain code that does the following: find the least expensive NLP2API Agent that supports the English language and has a reputation above a certain threshold (say, 4 stars out of 5), and submit the query there. If that Agent can’t find a result, choose the next-best Agent, and so on, giving up after a certain number of NLP2API Agents have been tried without a result.
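As a rough sketch, the fallback loop just described might look like the following. The agent fields and the query call here are hypothetical stand-ins, not the actual SingularityNET smart-contract interface:

```python
# Hypothetical sketch of the fallback selection loop described above.
# Field names and the per-agent "query" callable are illustrative only.

MAX_ATTEMPTS = 5
MIN_REPUTATION = 4.0  # "4 stars out of 5"

def find_translation(agents, nl_query):
    """Try eligible NLP2API agents cheapest-first until one returns a result."""
    candidates = sorted(
        (a for a in agents
         if "en" in a["languages"] and a["reputation"] >= MIN_REPUTATION),
        key=lambda a: a["price"],
    )
    for agent in candidates[:MAX_ATTEMPTS]:
        result = agent["query"](nl_query)  # submit the NL query to this agent
        if result is not None:
            return result
    return None  # give up after MAX_ATTEMPTS agents with no result
```

In a real deployment this selection policy would live inside (or be invoked by) the smart contract embodying the request; the point here is only the cheapest-first, reputation-filtered, bounded-retry shape of the loop.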

An NLP2API Agent will take the natural language query and turn it into a formal API request in one or more of the APIs existing in the system. So, for instance, in the example we’re looking at, we could end up with something conceptually equivalent to:

O = SNet-Foundation-Ontology-1.3

S = interpret-term(O, "stream")

S.payload-type = interpret-term(O, "images")

S.payload-rate-limit = interpret-term(O, "4ms")

F = interpret-term(O, "human face")

P = interpret-term(O, "identify and label")

T = eval(P, S, F)

The task here is T, an “identify and label” task applied to the stream S — an “image stream” with images arriving no faster than one per 4ms — and involving identifying and labeling entities matching the description “human face.”

The particulars of the actual language to be used for this sort of representation are still being worked out, and it surely won’t look syntactically like the above in the final deployed system. However, the semantics will be similar to the above. Note that this sort of relationship-set can easily be formalized in the OpenCog Atomspace, enabling various sorts of reasoning to be done against it.
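As a concrete (purely illustrative) mock-up of the request construction above: here the ontology is just a dictionary mapping surface phrases to made-up canonical identifiers, which is far simpler than any real ontology but shows the shape of the interpret-term / eval pipeline:

```python
# Toy mock-up of the pseudocode above. The "snfo:" identifiers and the
# dictionary ontology are invented for illustration; the real representation
# language is still being worked out.

ONTOLOGY = {  # stand-in for SNet-Foundation-Ontology-1.3
    "stream": "snfo:stream",
    "images": "snfo:image",
    "human face": "snfo:human-face",
    "identify and label": "snfo:identify-and-label",
    "4ms": ("snfo:duration-ms", 4),
}

def interpret_term(ontology, phrase):
    """Map a natural-language phrase to a canonical ontology term."""
    return ontology[phrase]

def build_task(ontology):
    """Assemble the formal task T = eval(P, S, F) from the example."""
    stream = {
        "term": interpret_term(ontology, "stream"),
        "payload_type": interpret_term(ontology, "images"),
        "payload_rate_limit": interpret_term(ontology, "4ms"),
    }
    return {
        "process": interpret_term(ontology, "identify and label"),
        "stream": stream,
        "filter": interpret_term(ontology, "human face"),
    }
```

A real NLP2API Agent would of course do genuine language interpretation rather than dictionary lookup; the sketch only fixes the structure of the output.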

The process of going from an NLP request to a formal request may be invoked by a human using a graphical UI or an interactive command-line interpreter, or by a script running automatically on the back end of an application. In the former case, a human customer (a developer of a software system that will potentially use SingularityNET) will use a UI to figure out what formal query their software should submit to SingularityNET, and possibly also use that same UI to figure out what AI Agents their software should use. In the latter case, the software written by the human customer just submits the NLP query automatically to the SingularityNET, allowing the network to convert the NLP query to a formal request as part of its internal operations.

Application developers preparing their software to use SingularityNET may also bypass the NLP2API process entirely, by simply scripting their software to submit formal requests. This should be no more difficult than using any other AI toolkit accessible via API. From this point of view the NLP translation process is more a nice-to-have than a must-have. However, due to the decentralized nature of SingularityNET, it does appear especially desirable to have this sort of mechanism on hand. In SingularityNET it is always possible that someone has inserted some funky new way of carrying out what one needs, and NLP interpretation may be a more flexible way of finding new AI Agents that can do what one actually needs, rather than merely matching the way one has formalized the need. On the other hand, fuzzy matching of formal requests will also be possible, and the balance of precise matching of formal requests, fuzzy matching of formal requests, and matching of NLP requests in a large and flourishing SingularityNET remains to be determined.

The example above uses an ontology, “SNet-Foundation-Ontology-1.3”, to define the various terms in the formal request. In general the SingularityNET may contain multiple ontologies, each containing a common set of terms that different Agents can use for communication. Initially the SingularityNET Foundation will supply a standard ontology, but the plan is that ultimately ontologies will be proposed and maintained in a fully decentralized way. In the workflow described here, the choice of which ontologies to use in creating formal requests is made by the NLP2API Agents. Such Agents will need to support the ontologies in wide use among the AI Agents performing tasks, or they won’t be useful. Anyone proposing a new ontology will do well to provide NLP2API Agents utilizing it, or to convince existing and popular NLP2API Agents to utilize it.

Once there is a formal request, it can be matched against the APIs supported by various AI Agents in the network. This matching will be done by a Discovery Agent. Initially, the SingularityNET Foundation will supply Discovery Agents that will carry out requests free of charge. Later on others may supply alternative Discovery Agents, operating either free of charge or for a fee.

To briefly illustrate the kind of matching to be carried out by Discovery Agents, consider again the task in the above example, which may be summarized as

eval("identify and label"_SNFO1.3, "stream.image"_SNFO1.3, "human face"_SNFO1.3)

Supposing that a certain AI Agent A123 supports an API that is described according to SNet-Foundation-Ontology-1.3 as

eval("identify and describe"_SNFO1.3, "stream.image"_SNFO1.3, "human face"_SNFO1.3)

Then based on the logical inheritance

identify and describe ==> identify and label

it may be concluded that A123 can fulfill the request in question.

On the other hand, suppose that a certain AI Agent A345 supports an API that is described according to SNet-Foundation-Ontology-1.3 as

eval("identify and describe"_SNFO1.3, "stream.image"_SNFO1.3, "object"_SNFO1.3)

Then based on the logical inheritances

identify and describe ==> identify and label

human face ==> object

it could be concluded that A345 can fulfill the given request also.
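The matching logic in the A123 and A345 examples can be sketched as follows. This is a minimal toy, assuming a flat set of one-hop inheritance links rather than a real ontology or OpenCog reasoning:

```python
# Hypothetical sketch of the Discovery Agent matching step described above.
# The inheritance links and triple format are illustrative only.

INHERITS = {
    # (child, parent): the child term specializes / implies the parent
    ("identify and describe", "identify and label"),
    ("human face", "object"),
}

def subsumes(general, specific):
    """True if `specific` equals `general` or directly inherits from it."""
    return general == specific or (specific, general) in INHERITS

def can_fulfill(capability, request):
    """Both arguments are (process, stream-type, entity-filter) triples.

    The capability's process must be at least as specific as the requested
    one (an agent that describes also labels), the stream types must match,
    and the capability's entity filter must be at least as general as the
    requested one (an agent that handles objects handles human faces).
    """
    cap_proc, cap_stream, cap_filter = capability
    req_proc, req_stream, req_filter = request
    return (subsumes(req_proc, cap_proc)
            and cap_stream == req_stream
            and subsumes(cap_filter, req_filter))
```

Note the asymmetry: inheritance runs in opposite directions for the process slot and the entity-filter slot, which is exactly why both A123 and A345 match the request in the text.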

But here the subtlety of the SingularityNET rating system comes into play. Algorithms that are good at generic object labeling in images are sometimes still not the best at recognizing faces in images. So it would be intelligent to check whether there is information about the reputation of A345 specifically for tasks such as

eval("identify and describe"_SNFO1.3, "stream.image"_SNFO1.3, "human face"_SNFO1.3)

or

eval("identify and label"_SNFO1.3, "stream.image"_SNFO1.3, "human face"_SNFO1.3)

The Agents’ reputations in regard to these various task descriptions, with their varying levels of specificity, must be intelligently merged together.
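One simple (purely illustrative) way such merging could work is an evidence-weighted average in which scores for more specific task descriptions receive higher weight. The weights and numbers below are invented; the real system is intended to use probabilistic reasoning rather than a fixed formula:

```python
# Hypothetical merging of reputation scores observed at different levels of
# task specificity: a weighted average where more specific task descriptions
# get higher weight, scaled by the amount of evidence (number of ratings).

def merge_reputation(scores):
    """scores: list of (stars, n_ratings, specificity_weight) tuples."""
    num = sum(s * n * w for s, n, w in scores)
    den = sum(n * w for _, n, w in scores)
    return num / den if den else None

# A345's invented record: strong at generic object labeling, weaker and
# less-tested on the exact face-labeling task.
merged = merge_reputation([
    (4.8, 200, 0.5),  # "identify and describe ... object": general task
    (3.5, 20, 1.0),   # "identify and label ... human face": exact task
])
```

Even this toy shows the tension the text describes: the large body of generic evidence dominates the merged score unless the specificity weights are chosen carefully, which is the kind of judgment a probabilistic reasoner is better placed to make.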

In this example the inference involved in matching customer requests with Agents’ task descriptions is quite simple. In other cases the descriptions and inferences may get more complex. Ultimately we arrive at the problem of type inheritance in functional programming languages with dependent types. Fortunately, this problem has already been solved with reasonable generality, and solutions are embodied in languages such as Agda. This sort of inheritance inference can be handled fairly straightforwardly using OpenCog’s URE rule engine. The merging of reputation scores associated with different tasks, with various relationships to each other, can be addressed by OpenCog’s PLN logic engine, which can take into account the probabilistic overlaps between functions and concepts.
