SingularityNET AI Platform 2024 Roadmap

Albina Pomogalova
Published in SingularityNET
Feb 27, 2024

Introduction

Over the last few years, we have been pushing the boundaries of decentralized AI, building a comprehensive suite of innovative tools and products for developing and utilizing AI services in ways that prioritize decentralization, safety, scalability, and benefit to all — the SingularityNET decentralized AI Platform.

The core goal of our Platform remains the same as it was in 2017 when we founded SingularityNET as the first truly decentralized AI network: To create a foundation suitable for running AGI systems with general intelligence at the human level and beyond in a secure, efficient, easily usable and fully decentralized way, without any central owners or controllers. Along the way, as our AI systems gradually move toward full AGI capabilities, the platform must also provide a high-quality decentralized infrastructure for AI applications serving diverse vertical markets.

One thing this mandate means is that — unlike most of the more recent entrants in the decentralized AI space — the Platform cannot be specialized to any particular class of AI algorithms or data types, nor any particular vertical market application. If it is going to serve as the decentralized infrastructure for the global economy as the world enters the AGI phase, it must be far more generic and flexible than that.

To meet these goals, the Platform does much more than aggregate multiple AI services on a single decentralized network: it functions as an end-to-end ecosystem of diverse, complex processes and interconnected components. These include the Marketplace and API, Publisher Portal, Developer Portal, SDKs, CLI, Daemon, and advanced smart contracts. Each component addresses several challenges, ranging from showcasing AI services and their respective capabilities to enabling integration into third-party applications.

The Platform will orchestrate services, payments, and hosting in a decentralized way, integrating horizontal blockchain layers and vertical implementations. Following this vision, the Platform architecture foresees AI models being hosted not only on the serverless infrastructure currently provided as an easy and risk-free service by the Platform, but also on fundamentally decentralized infrastructures, such as HyperCycle, NuNet and ICP (Internet Computer Protocol). All this will be bound together by ‘AI Deployment Infrastructure-as-a-Service,’ making the deployment and hosting process easy, flexible, and powerful. The particulars of each solution component are explained further below.

These core functionalities serve as the foundation, supplemented by ongoing research and development initiatives exploring the implementation of next-generation tools and third-party integrations.

The purpose of this report is to detail the achievements of the past year and outline our Platform development roadmap for 2024, with an emphasis on four key items:

  • The Internet of Knowledge — a unique approach to extending the applicability of decentralized AI in the immediate term, and paving the way for the emergence of decentralized AGI;
  • Integrations between the SingularityNET Platform and other decentralized networks: HyperCycle, NuNet, Cardano, Dfinity (ICP), and others, all part of the process of moving toward a next-generation cross-chain decentralized AI ecosystem;
  • Scalability and Usability improvements, including large new features in basic areas like hosting and billing;
  • Adoption through targeted initiatives like Deep Funding, SingularityNET ecosystem spinoffs, and more.

Throughout 2023, the Platform underwent architectural changes and other strategic enhancements aimed at refining and solidifying its development trajectory and aligning it with our Beneficial General Intelligence (BGI) plans, bringing us closer to our shared vision: a full-scale modernized AI Platform.

Table of contents:

· Platform Strategy and Roadmap for 2024
· The Internet of Knowledge — A Distributed and Decentralized Metaframework
· Deployment of Knowledge Nodes and Model Nodes on The Platform
· Decentralized AI Deployment Infrastructure-as-a-Service
· AI-DSL and Unified API for The Internet of Knowledge
· Knowledge Node Assembling Deployment Toolkit
· Neural-symbolic MeTTa-based Framework for AI Orchestration and LLM Tooling For Zarqa
· SingularityNET Platform Assistant
· Decentralized Collaborations on the Platform Architecture and Vertical Tech Stack
· ICP Integration and Decentralized AI Marketplace Deployment
· HyperCycle: Steps Toward a Fully AI-Customized Decentralized Software Stack
· NuNet AI Model Hosting for the SingularityNET Platform
· Custom AI Developments for the Enterprises
· Accelerating Progress on Cardano Integration
· Scalability and Usability Improvements, Smoothing the Path to a Decentralized AI Future
· Improving the Onboarding Experience
· Development of Text User Interface (TUI)
· Improved Technical Documentation
· Streamlining the Onboarding Process and Publisher Experience
· Facilitated Service Development and Deployment Automation
· Improving Key Components: CLI, SDK, Daemon
· Progressed with Daemon, CLI, and SDK Transformation
· Developing Zero-code AI Model Training
· SingularityNET Token Bridge
· Driving Platform Utilization in 2024
· Deep Funding
· Deep Funding Platform Development
· Deep Funding Request For Proposals (RFPs)
· Growth Strategy
· How to Get Involved?
· SingularityNET Ecosystem Spinoffs and incubating initiatives
· Rejuve.AI
· Jam Galaxy
· Mindplex
· Domain-Oriented AI Metaservices With AI Training Capabilities
· Scaled Image Generation Metaservice
· Controllable Speech Synthesis Metaservice
· Text Generation Training Metaservice
· Conclusion

Platform Strategy and Roadmap for 2024

All released information represents SingularityNET’s current intent, is subject to change or withdrawal, and represents only goals and objectives.

The Internet of Knowledge — A Distributed and Decentralized Metaframework

Deployment of Knowledge Nodes and Model Nodes on The Platform

The concept of the Internet of Knowledge involves separating ML models and knowledge containers by implementing them as decentralized nodes, creating a synergetic network in which knowledge is interoperable across an ecosystem of ML models and can be composed into efficient AI metaservices. In such a network, Knowledge Nodes can be domain-, task- and identity-dedicated, dynamically updated and cross-utilized by ML models represented as Model Nodes. This opens a wide range of possibilities for academic research, software development, data operations, business-oriented, socially motivated, idea-driven, and creative teams without specific AI development skills to contribute efficiently to the evolution of beneficial AI.

The Internet of Knowledge can contain static nodes aggregating gold-standard knowledge and best practices or, at the other end of the spectrum, highly dynamic domain or subdomain representations with real-time updates. Knowledge Nodes can support different modalities and multimodal knowledge, contain both declarative and procedural knowledge, be focused on implementation details and real-world scenarios, have different designs, implement various databases, and be supplemented by various retrieval subsystems for better task-driven optimization. Each node has its own technical specification for its operation, submitting a specifically formulated task or entity identifier to the control stack.

It should be emphasized that knowledge representation is most effective in the form of a graph, where a system of connections is specified between data units. The toolkit described here simplifies the creation of Knowledge Nodes, providing the user with tools for formatting and organizing data as a graph structure, as well as a set of universal, extensible interfaces for interaction both with AI agents and with external systems. The use case is as follows: using the toolkit in a declarative style, the user describes the desired parameters of the node and the “contract” for the data structure; an automatic deployment process is then launched, leaving the user with their own Knowledge Node, completely ready to work in the decentralized Internet of Knowledge.

A Knowledge Graph is a semantic network that visualizes entities and the relationships between them. The information represented by the Knowledge Graph is stored in a graph database. An entity is a real object such as an event or a person. In a Knowledge Graph, these real objects are represented as nodes. Each node/entity is related to other nodes/entities. The relationships are represented by edges — connections between the nodes. Edges can, for example, have the meaning “is part of,” “works at,” or “has properties.” By connecting the nodes via the edges, i.e. the relationships between the individual objects, a knowledge network is created.

The graph database stores the Knowledge Graph’s nodes and edges as well as the associated metadata. The nodes and edges of a particular graph can be described by universal methods of graph representation, which make it possible, on the one hand, to describe complex and structured data and, on the other, to maintain a flexible and extensible model for representing information. An important aspect of creating a Knowledge Graph is the semantic enrichment of the data: the data is enriched with additional information that more precisely describes the meaning of, and relationships between, the nodes and edges.
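The node-edge-metadata structure described above can be sketched in a few lines of plain Python. This is purely an illustration, not the Platform's storage format; the entity and relation names are invented for the example:

```python
# Minimal knowledge-graph sketch: entities as nodes, relations as labeled
# edges, with metadata dicts attached for semantic enrichment.
class KnowledgeGraph:
    def __init__(self):
        self.nodes = {}   # entity id -> metadata dict
        self.edges = []   # (source, relation, target) triples

    def add_entity(self, entity_id, **metadata):
        self.nodes[entity_id] = metadata

    def relate(self, source, relation, target):
        self.edges.append((source, relation, target))

    def neighbors(self, entity_id, relation=None):
        # Traverse outgoing edges, optionally filtered by relation label.
        return [t for (s, r, t) in self.edges
                if s == entity_id and (relation is None or r == relation)]

kg = KnowledgeGraph()
kg.add_entity("SingularityNET", type="organization")
kg.add_entity("MeTTa", type="language")
kg.relate("MeTTa", "is part of", "SingularityNET")
print(kg.neighbors("MeTTa", "is part of"))  # ['SingularityNET']
```

A production Knowledge Node would back this structure with a graph database and retrieval subsystems, but the node/edge/metadata trio is the same.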

The Knowledge Nodes, in collaboration with Model Nodes, create a powerful framework capable of acting in different modalities to facilitate cognitive synergy. In turn, Model Nodes, which can be represented by symbolic and neural-symbolic algorithms as well as deep learning models such as GANs, Stable Diffusion, and Transformers, can benefit from knowledge access in the process of their evolution: they are not constrained to using relevant knowledge at inference time but can also reuse it in retraining setups, giving the system as a whole the ability to perform federated and continual learning.

We also apply Integration Nodes that wrap popular frameworks for building and deploying dialogue agents, such as LangChain, AutoGPT, and AutoChain, in order to aggregate and synergize the best practices evolved in various developer communities and simplify their integration into the Internet of Knowledge on the SingularityNET Platform.

Ultimately, these factors position SingularityNET to become the Knowledge Layer of the Internet in the AI era. Knowledge Nodes and Model Nodes deployed on the blockchain can, through the use of smart contracts, interact with AI agents, thereby achieving all the main advantages of functioning in a decentralized network and accelerating the evolution of beneficial AI.

Decentralized AI Deployment Infrastructure-as-a-Service

With our AI-infrastructure-as-a-service (IaaS), AI developers can focus on what they do best, and AI customers can be confident that the services they want will have high uptime, robustness, and performance and will be deployed and managed in secure, scalable environments. We plan to offer this for a fee or a share of revenues, similar to how app stores have simplified the mobile app economy for users and developers. Our IaaS tools will play a role similar to that of tools on platforms such as AWS and Azure, but with the following design goals tailored to the needs of networked AI:

  • Optimize for the computational requirements of training and deploying machine learning models. This goes beyond deep neural networks and GPU usage and considers graph processing, multi-agent systems, dynamic distributed knowledge stores, and other processing models needed to allow the emergence of networked AGI.
  • Support processing of stateful services, which currently represents a challenge on cloud platforms but is necessary for many tasks, such as those of conversational agents, task-oriented augmented reality, personal assistants, and others.
  • Provide different runtimes and environments for deploying models, at the checkpoint and code level as well as at the container and VM level.
  • Autoscale service load in a serverless, event-driven architecture that lets models scale with demand.
  • Include secure support for public, private, and hybrid cloud deployments (public–private mix and edge–cloud mix).
  • Dynamically optimize compute locations to maximize compute and data proximity, improving performance and reducing bandwidth costs.

We will leverage critical open-source technologies such as Kubernetes and CloudStack and support the deployment of our IaaS solution both on top of existing cloud platforms (where we make optimal use of built-in tooling) and in bare-metal data centers. One key consideration is using cryptocurrency mining hardware to train AI models and run long-running AI reasoning and inference tasks.
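The serverless autoscaling goal above boils down to an event-driven scaling decision. As a rough sketch only (the thresholds, parameter names, and scale-to-zero policy here are invented, not the Platform's actual policy):

```python
# Illustrative autoscaling decision for a serverless AI service:
# map the current pending-request load to a replica count.
# All numbers are example values, not real Platform defaults.
def desired_replicas(pending_requests, per_replica_capacity=10,
                     min_replicas=0, max_replicas=50):
    if pending_requests == 0:
        return min_replicas  # idle services can scale to zero
    needed = -(-pending_requests // per_replica_capacity)  # ceiling division
    return max(min_replicas, min(needed, max_replicas))

print(desired_replicas(0))       # 0: no load, no resources consumed
print(desired_replicas(25))      # 3: 25 requests / 10 per replica, rounded up
print(desired_replicas(10_000))  # 50: capped at max_replicas
```

In an event-driven setup, a function like this would be invoked on each load-metric event rather than on a polling loop.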

AI-DSL and Unified API for The Internet of Knowledge

AI-DSL is a powerful tool that offers a convenient and universal standard for describing the interfaces and interactions of AI services. From the developer’s point of view, they will no longer be exposed to Protocol Buffers (Protobuf) by default. Instead, developers will enter their service descriptions directly in AI-DSL, and the Platform SDK modules will automatically generate the Protobuf file required by the technical components of the Platform. The development strategy also plans to add support not only for gRPC services but also for REST APIs, enabled by the planned support for translating AI-DSL into REST API definitions. This will significantly expand the capabilities of the Platform for AI developers and ensure a high level of compatibility.

Developing tailored AI algorithms that can solve real-world problems has been tedious, expensive, and time-consuming. The implementation of AI-DSL as a part of the Platform SDK is paving the way for users to access all the AI services they need in a single place. These self-assembling workflows will replace the current labor-intensive process for creating specialized, one-off AI processes. This protocol will create a universal mode of AI intercommunication and collaboration, making the benefits of complex AI processes accessible at scale.
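The AI-DSL syntax itself is not shown in this post, so purely as an illustration of the translation step, here is a sketch in which a declarative service description (a plain Python dict standing in for an AI-DSL document) is mechanically rendered into the Protobuf definition the Platform components consume. The service and message names are invented:

```python
# Illustrative only: translate a declarative service description
# (standing in for AI-DSL) into a proto3 definition string.
def to_proto(desc):
    lines = ['syntax = "proto3";', f'service {desc["service"]} {{']
    for m in desc["methods"]:
        lines.append(f'  rpc {m["name"]} ({m["input"]}) returns ({m["output"]});')
    lines.append("}")
    for msg, fields in desc["messages"].items():
        lines.append(f"message {msg} {{")
        # proto3 field numbers start at 1
        for i, (fname, ftype) in enumerate(fields.items(), start=1):
            lines.append(f"  {ftype} {fname} = {i};")
        lines.append("}")
    return "\n".join(lines)

description = {
    "service": "Summarizer",
    "methods": [{"name": "summarize", "input": "Text", "output": "Summary"}],
    "messages": {"Text": {"body": "string"}, "Summary": {"body": "string"}},
}
print(to_proto(description))
```

The real SDK generator would of course handle streaming, options, and validation; the point is only that the Protobuf layer becomes a derived artifact rather than something the developer writes by hand.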

The next stage in the implementation of the Internet of Knowledge is the development of a universal API for services related to both knowledge graphs and LLMs, representing Knowledge Nodes and Model Nodes respectively. This standard is aimed at ensuring the interaction of the Platform with the Internet of Knowledge ecosystem through different Platform components, such as the MeTTa SDK (MeTTa is a purpose-built language for cognitive computations), the Python SDK, the JavaScript SDK, and the MeTTa-Motto modular library (a package that provides interoperability of LLMs and a variety of AI/ML models with Knowledge Graphs, databases, reasoning algorithms, etc.).

Knowledge Node Assembling Deployment Toolkit

The Knowledge Node Assembling Deployment Toolkit is an advanced tool designed for creating and deploying generic modules called Knowledge Nodes that provide universal approaches to knowledge representation. These modules are capable of interacting with ML models and the MeTTa SDK, i.e. with AI agents in the Internet of Knowledge layer. They are designed to store and provide knowledge and contextual data to improve the quality of AI services.

When configuring a knowledge node, the user describes a data contract, according to which they will fill the node with “knowledge data”. A data contract is a declarative configuration specifying the format, structure, and metadata of knowledge data. It serves as a blueprint for the system, guiding the interpretation and integration of diverse data sources into a cohesive graph. This contract defines the semantics, relationships, and properties of entities, allowing the system to harmonize disparate data elements into a unified knowledge representation.

By adhering to the data contract, the system can ensure consistency and interoperability, facilitating the seamless construction and enrichment of the Knowledge Graph from heterogeneous data sources. Knowledge and contexts are subsequently retrieved from the knowledge node through searching and navigating a graph, which involves querying for specific information or exploring relationships within the graph. Navigation includes traversing the graph by following edges to discover related entities and uncovering contextual information. Data enrichment and the use of various types of metadata enhance the precision of searches, enabling the knowledge node engine to explore complex interconnections and derive insights from the rich, interconnected data structure.
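A data contract of the kind described might look like the following sketch. The schema format, entity types, and required fields here are hypothetical, invented to show the idea of validating records against a declarative contract before they enter the graph:

```python
# Hypothetical Knowledge Node data contract: declares entity types,
# their required properties, and the allowed relations between them.
CONTRACT = {
    "entities": {
        "Paper":  {"required": ["title", "year"]},
        "Author": {"required": ["name"]},
    },
    "relations": {"wrote": ("Author", "Paper")},
}

def validate(entity_type, properties, contract=CONTRACT):
    # Reject records that do not satisfy the contract before graph insertion.
    spec = contract["entities"].get(entity_type)
    if spec is None:
        return False  # entity type not declared in the contract
    return all(key in properties for key in spec["required"])

print(validate("Paper", {"title": "PLN", "year": 2008}))  # True
print(validate("Paper", {"title": "missing year"}))       # False
```

Because the contract is declarative, the same document can drive both ingestion-time validation and the automatic deployment step described above.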

Conceptually, the core of a knowledge node is based on a graph database with the ability to expand with additional data storage as needed. The toolkit allows users to connect universal interfaces to interact with the node: a control interface, an interface for interacting with AI agents, and an interface for connecting external systems, e.g., for updating the Knowledge Graph from the outside. Users can implement their own data collection and processing system, or use their own data warehouse and connect it to a knowledge node to dynamically update the Knowledge Graph via either a webhook mechanism or a REST API.
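The external-update path via webhook could accept payloads shaped roughly like the following. This is a minimal in-memory sketch; the real endpoint schema, operation names, and payload fields are assumptions for illustration:

```python
# Sketch of a Knowledge Node's external-update handler: a webhook payload
# is validated and applied to an in-memory graph. Field names are invented.
graph = {"nodes": {}, "edges": []}

def handle_webhook(payload):
    op = payload.get("op")
    if op == "add_node":
        graph["nodes"][payload["id"]] = payload.get("metadata", {})
        return "ok"
    if op == "add_edge":
        src, rel, dst = payload["source"], payload["relation"], payload["target"]
        if src in graph["nodes"] and dst in graph["nodes"]:
            graph["edges"].append((src, rel, dst))
            return "ok"
        return "unknown node"  # refuse edges to entities not yet registered
    return "unsupported op"

handle_webhook({"op": "add_node", "id": "paper:1", "metadata": {"title": "PLN"}})
handle_webhook({"op": "add_node", "id": "topic:reasoning"})
print(handle_webhook({"op": "add_edge", "source": "paper:1",
                      "relation": "about", "target": "topic:reasoning"}))  # ok
```

In deployment this handler would sit behind the node's REST interface, with the data contract deciding which payloads are admissible.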

The SingularityNET team is currently working on a comprehensive example of such a Knowledge Node service to streamline further development. The service will include a dynamically updated graph database and provide access to an extensive data set of scientific and technical texts in the AI and machine learning domain, as well as related technical domains. To implement the prototype, a database of technical articles with a total initial volume of about a million paragraphs, dynamically updated from several sources, will be used. Scientific and technical texts and their metadata will be transposed into a graph structure, which will allow them to be linked with each other by a system of connections based on a variety of metadata and, ultimately, provide highly relevant contexts for queries in the field covered. Such a service will provide reliable support for AI services and serve as a strong example of the usefulness of a Knowledge Node as an element of the Internet of Knowledge.

Neural-symbolic MeTTa-based Framework for AI Orchestration and LLM Tooling For Zarqa

Disruptive neural-symbolic architectures developed by Zarqa, a novel venture from SingularityNET, have been mobilizing our engineering expertise in solutions based on scaled neural-symbolic AI. Zarqa aims to create a pioneering, far more powerful next generation of LLMs, capable of disrupting any industry by merging symbolic reasoning and dynamic Knowledge Graphs with the power of large-scale generative AI based on deep neural networks, resulting in unparalleled conversational and problem-solving capacity.

The Neural-symbolic MeTTa-based Framework for AI Orchestration is aimed at achieving efficient hybridization by integrating LLMs into a neural-symbolic cognitive architecture. This architecture includes a metagraph-based knowledge store and a cognitive programming language developed by SingularityNET, called MeTTa. Programs form the content of the store and can represent declarative and procedural knowledge, as well as queries to this knowledge, including queries over the programs themselves (implying full introspection), and reasoning and learning rules and heuristics. The architecture can also perform subsymbolic operations, in particular by processing information with neural network modules.

The core framework allows different inference and reasoning paradigms to be implemented in MeTTa, such as Probabilistic Logic Networks (PLN), currently being implemented. The MeTTa language design enables highly granular interoperability between reasoning steps and neural information processing operations. This provides features such as storing information produced by generative neural networks in response to user queries into the knowledge base (one-shot external memory for LLMs), verification of LLM output by querying symbolic ontologies (hallucination mitigation), LLM conditioning on Knowledge Graph content, and chaining of LLM inference steps controlled by or altered with symbolic knowledge-based reasoning.
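The hallucination-mitigation pattern above, checking generated statements against a symbolic knowledge base, can be sketched in miniature. The triple store, the claim format, and the deliberately false claim below are all invented for the example (the real framework does this in MeTTa over a metagraph, not over a Python set):

```python
# Sketch of symbolic verification of LLM output: each (subject, relation,
# object) claim extracted from a generated answer is checked against a
# knowledge base of trusted triples. Triples are illustrative.
KNOWLEDGE_BASE = {
    ("MeTTa", "developed_by", "SingularityNET"),
    ("PLN", "implemented_in", "MeTTa"),
}

def verify_claims(claims, kb=KNOWLEDGE_BASE):
    # Partition claims into grounded (supported by the KB) and unsupported.
    grounded = [c for c in claims if c in kb]
    unsupported = [c for c in claims if c not in kb]
    return grounded, unsupported

claims = [("PLN", "implemented_in", "MeTTa"),
          ("PLN", "developed_by", "OpenAI")]  # a hallucinated claim
grounded, unsupported = verify_claims(claims)
print(len(grounded), len(unsupported))  # 1 1
```

Unsupported claims can then be dropped, flagged, or fed back as a correction prompt, while grounded claims may be written into the knowledge base as one-shot external memory.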

AI service orchestration and calls are among the most relevant areas of development, especially in the context of the Platform. Given the multitude of AI services, as well as the organization of the Internet of Knowledge, individual requests to a specific service become redundant and unproductive.

The idea behind this direction is to create a system that converts requests into a formal language using LLMs and defines several individual requests to AI services within its context. The Neural-Symbolic MeTTa-based Framework for AI Orchestration thus allows optimized, aggregated calls to a number of AI services, as well as Internet of Knowledge nodes, to process one formal request that is in fact multi-component.
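The aggregation of a multi-component formal request can be illustrated roughly as follows. The service names, toy handlers, and request format are hypothetical; in the actual framework the decomposition would be produced by an LLM and expressed in MeTTa rather than as a Python list:

```python
# Sketch of orchestration: one formal multi-component request is split
# into individual service calls, dispatched, and the results aggregated.
# Service names and handlers are invented stand-ins for Platform services.
SERVICES = {
    "summarize": lambda text: text[:20] + "...",
    "translate": lambda text: f"[fr] {text}",
}

def orchestrate(request):
    # request: list of (service_name, payload) components of one formal query.
    results = {}
    for service_name, payload in request:
        handler = SERVICES.get(service_name)
        results[service_name] = handler(payload) if handler else None
    return results

out = orchestrate([("summarize", "Decentralized AI for everyone, forever."),
                   ("translate", "knowledge")])
print(out["translate"])  # [fr] knowledge
```

The benefit over issuing each call independently is that the orchestrator sees the whole request and can batch, reorder, or route components, exactly the optimization the framework is meant to perform at scale.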

SingularityNET Platform Assistant

The rising number of services on the SingularityNET marketplace highlighted the need for an automated solution to ensure user comfort and efficient navigation. As a response, we are currently developing an intelligent SingularityNET Platform Assistant, initially envisioned as a chatbot for service-related inquiries. However, further exploration revealed the potential for more comprehensive and innovative functionalities.

The development will prioritize a phased approach, initially introducing a chatbot to answer service-specific and Platform-wide questions. It will leverage existing documentation and development processes to assist users seamlessly. The long-term vision encompasses additional functionalities, such as:

  • Onboarding support: Guiding new users through the Platform’s features and operations;
  • Automated code generation: Streamlining specific tasks by automatically generating code for AI models wrapping, data processing, etc.;
  • Integrated service calls: Enabling users to interact with Platform services directly within the chatbot interface.

The Assistant’s technical foundation harnesses the Internet of Knowledge ecosystem on the SingularityNET Platform and cutting-edge technologies like MeTTa-Motto, which provides interoperability of LLMs with Knowledge Graphs and reasoning. This innovative approach ensures a robust and adaptable foundation for ongoing and future development.

Decentralized Collaborations on the Platform Architecture and Vertical Tech Stack

ICP Integration and Decentralized AI Marketplace Deployment

We are collaborating closely with Dfinity to leverage the strengths and capabilities of the SingularityNET Platform and the Internet Computer Protocol (ICP) to advance decentralized AI infrastructure and bring AI-based services to all dApps building on the ICP. This initiative aligns with our shared mission of democratizing AI technology by enhancing Platform functionality and user experience. Complementing our partnerships with other entities, such as Input Output Global (IOHK) from the Cardano ecosystem, and our work with HyperCycle to create our own unique Layer 0++ blockchain framework, this Dfinity collaboration exemplifies our commitment to a cross-chain approach to decentralized AI.

A central aspect of the collaboration with Dfinity is the joint development of a decentralized AI Marketplace hosted on the ICP network. The SingularityNET marketplace serves as a hub for accessing and interacting with AI services on the Platform. It offers extensive functionalities such as test requests for services, purchase of services, and exploring and understanding service capabilities. By decentralizing this crucial component, we move closer to achieving complete decentralization and fostering an even broader distribution of AI services and development activities.

To achieve this, we will introduce a universal template for building user-friendly AI service interfaces that can be used by both service providers and users to create UI interfaces with a broad range of functionalities, including web3 authorization, wallet connectors, and, as a result, payment services. This solution aims to significantly reduce the complexity of interface development and streamline the distribution and integration of AI services.

As part of this collaboration, we are considering the creation of pre-built templates for frequently used AI services. These templates would offer standardized UI elements and features designed for particular service categories.

Building upon our commitment to expanding options and platforms for hosting AI services, we are actively exploring the potential of ICP to host AI services. This feasibility study aligns with our vision for modern AI development, emphasizing scalability, dynamic resource redistribution, and greater accessibility.

Concurrently, we are testing the possibility of hosting AI models and Platform services in ICP canisters. Our research aims to answer key questions such as types of AI models suitable for ICP hosting, performance considerations, and model size limitations.

HyperCycle: Steps Toward a Fully AI-Customized Decentralized Software Stack

HyperCycle is building the essential missing components required for the Internet of AI, where AI agents with complementary capabilities can seamlessly transact, enabling them to collectively tackle problems of ever-increasing size and complexity, empowered by agent-to-agent microtransactions for microservices with sub-second finality.

The TODA protocols provide the essentials. The ledgerless TODA/IP consensus protocol minimizes network traversal and performs only the minimum computation necessary to secure the network itself. Building on TODA/FP, Earth64 Sato-Servers ensure that their assets are secured independently of any blockchain ledger.

Beyond the core functionality, HyperCycle offers vast possibilities. SingularityNET envisions an AI marketplace where humans and machines can transact, advancing machine intelligence and paving the way for true AGI. Additionally, marketplaces for high-performance computing can leverage HyperCycle as a service, supporting advancements in various fields such as science, medicine, and technology.

The synergistic combination of AI services, a distributed registry, the solution architecture, the node software client, and the ability to run a node on several computing devices opens up significant opportunities. Building upon this foundation, we are exploring the integration of HyperCycle with our Platform, focusing on several promising avenues.

The first avenue would be hosting models on HyperCycle. Establishing a single, streamlined framework for service registration, hosting, and calling will enable service providers to efficiently manage their AI models within a decentralized environment, potentially leading to increased user activity, improved service development, and optimized resource utilization.

While decentralized AI model hosting on HyperCycle empowers efficient management and utilization of AI services, the true game-changer lies in its embedded blockchain technology. The blockchain serves as a trustless bridge, facilitating secure and transparent financial transactions with token conversion. This eliminates concerns about centralized control and ensures fair value exchange, creating a modern billing system based on trust and transparency.

Beyond billing, integrating HyperCycle with key blockchain components at the Platform level holds immense potential. This can lead to lower commission fees, faster data processing and exchange, and maximized performance and security. Moving the core logic of smart contracts to HyperCycle promises unprecedented speed and security for user interactions.

Currently, our team is meticulously researching the optimal architecture and interaction model to achieve the perfect balance between speed, security, and user experience. This research paves the way for an even more advanced future: the potential to create smart contracts within the HyperCycle network on MeTTa (Meta Type Talk). This revolutionary concept is poised to bridge the gap between AI and blockchain development, opening doors to a new era of AI-powered smart contracts.

NuNet AI Model Hosting for the SingularityNET Platform

NuNet is building a globally decentralized computing framework that combines the latent computing power of independently owned compute devices across the globe into a dynamic marketplace of compute resources. This approach transcends limitations like physical location or device size, empowering users to discover and utilize the precise amount of computing power they need whenever they need it.

NuNet prioritizes environmental friendliness, a pivotal aspect in the ever-growing demands of AI and decentralized systems. These domains heavily rely on power consumption for developing, training, and hosting AI services. NuNet addresses this challenge by harnessing underutilized computing resources, effectively eliminating the need for additional energy expenditure. Imagine the impact of leveraging idle computing power across the globe instead of constantly building and powering new infrastructure.

One of our most important areas of collaboration with NuNet is synergizing the SingularityNET Platform with the decentralized NuNet ecosystem. This hinges on seamlessly hosting AI services within NuNet, unlocking numerous benefits for platforms and users.

By establishing this robust interaction, we aim to significantly simplify and streamline crucial processes around AI service hosting. Specifically, service providers will be able to host their offerings on NuNet effortlessly. This approach will allow us to dynamically adjust allocated resources at times of increased load on an AI service, regulate the speed of model training, and personalize many parameters.

As a result, we will streamline the entire onboarding process, eliminating the need for users to delve into allocated resources and their underlying context. Service providers will be able to seamlessly launch, train, register, and publish their models on the marketplace. Additionally, the internal billing system will cover all actions, simplifying the process from multiple payments (including separate ones for hosting) to just a few clicks. This is particularly beneficial for scenarios where clients train their models on their own datasets, as external actions and associated time/resource consumption are significantly reduced.

Custom AI Developments for the Enterprises

AI chatbots and goal-oriented dialogue systems have emerged as critical tools for enhancing customer engagement, streamlining support, and driving communication efficiency. SingularityNET offers a comprehensive framework for developing tailored AI solutions for enterprises, leveraging advanced machine learning and robust backend development.

To help organizations create AI-enabled business strategies, we begin by gaining a thorough understanding of their business model, customer service processes, and specific requirements. We then provide them with efficient goal-oriented dialogue systems and full-fledged cognitive architectures acting not only in the text domain but also in the speech, visual, time-series, and action-chaining domains, with high performance at scale.

The effectiveness of an AI chatbot hinges on the quality and relevance of its training data and on disruptive technical solutions for dynamic knowledge grounding that solve downstream tasks and keep contextual relevance throughout the whole dialogue chain. Historical customer interactions, FAQs, and domain-specific content serve as vital resources for training the generative models and supplying the AI at inference time. Dynamically updated structured Knowledge Graphs and databases act as the backbone of the chatbot’s knowledge supply mechanism, enabling accurate and contextually relevant answers while leveraging the Internet of Knowledge ecosystem on the SingularityNET Platform.

Building upon the foundation of dialogue systems, SingularityNET offers solutions for call centers, leveraging speech models. These solutions present additional challenges compared to chatbots, including managing voice characteristics, tone, emotionality, pronunciation, literacy, pauses, speed of answers, and even accents. Additionally, speech recognition modules require integration for accurate request processing. By ensuring fast response times, high answer relevance, and maintaining a user-defined conversation structure, combined with anticipating user needs expressed implicitly, we strive to create a seamless user experience akin to interacting with a competent, helpful human specialist.

Our experience developing end-to-end AI solutions in humanoid robotics for Hanson Robotics (the Sophia robot), Awakening Health (the Grace robot), and Yaya Labs (the Desdemona star robot), each with unique characteristics and abilities, demonstrates the power of AI to deal with real-world complexity. These robots hold dialogues with humans, represent consistent artificial personalities, leverage domain-specific knowledge, utilize multimodal perception, and introduce advanced compositional behavior control.

Accelerating Progress on Cardano Integration

In November 2023, SingularityNET successfully launched AGIX staking on the Cardano blockchain, marking a significant step forward in the migration process of our decentralized AI Platform. This feat required extensive technical expertise, including in-depth analysis, high-quality architecture design, and careful consideration of Cardano’s specific features and the logic of constructing smart contracts in Plutarch, a typed eDSL in Haskell for writing efficient scripts.

The development of AGIX staking has allowed us to significantly deepen our competencies in building decentralized solutions specialized for the Cardano blockchain. Since staking is a financial solution, this work has also informed our thinking on transferring the Platform to the Cardano blockchain.

It should be acknowledged that this process will likely be implemented in stages due to the inherent complexities associated with supporting multiple blockchain networks within the Platform. The key processes that involve interaction with the blockchain network and hold the most significance are:

  • Registration of new service providers and their services;
  • Creation and management of payment channels.

As the initial step toward enhancing user experience on the Platform, we identified the payment logic used for service utilization as the first candidate for transfer. To achieve this, the team conducted a thorough analysis of existing financial projects on Cardano, searching for projects that could offer similar functionality and potential integration options. However, due to the complexity or missing functionality of existing solutions, the Platform team has developed an alternative smart contract architecture and structure specifically tailored to implement the required functionality on Cardano.
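The payment-channel flow these smart contracts implement can be pictured with a minimal toy sketch. All names below are hypothetical and simplified (a real channel uses on-chain deposits and cryptographic signatures verified by the contract, not an HMAC): the client locks a deposit, signs incremental off-chain claims as it consumes services, and the provider settles with the latest claim.

```python
import hashlib
import hmac

class PaymentChannel:
    """Toy model of a unidirectional payment channel. Hypothetical,
    not the actual SingularityNET contract interface."""

    def __init__(self, client_key: bytes, deposit: int):
        self.client_key = client_key  # stands in for the client's signing key
        self.deposit = deposit        # funds locked on-chain when the channel opens
        self.settled = 0              # amount already withdrawn by the provider

    def sign_claim(self, amount: int):
        """Client signs an off-chain claim authorizing `amount` in total."""
        if amount > self.deposit:
            raise ValueError("claim exceeds channel deposit")
        sig = hmac.new(self.client_key, str(amount).encode(), hashlib.sha256).hexdigest()
        return amount, sig

    def settle(self, amount: int, sig: str) -> int:
        """Provider submits the latest signed claim; only the delta is paid out."""
        expected = hmac.new(self.client_key, str(amount).encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            raise ValueError("invalid signature")
        payout = amount - self.settled
        self.settled = amount
        return payout

channel = PaymentChannel(client_key=b"client-secret", deposit=100)
claim = channel.sign_claim(30)    # claim after a few service calls
payout = channel.settle(*claim)   # provider settles, receiving 30
```

Because only the latest claim needs to touch the chain, per-call costs stay off-chain, which is what makes micropayments for individual AI service calls economical on any supported network.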

Beyond the development of smart contracts (both on-chain and off-chain components), it will be necessary to develop logic for supporting the Cardano network at the Daemon level, which serves as a key technical component for Platform and SDK navigation. Additionally, integration support for Cardano and a selection of new options needs to be implemented within the Marketplace front end.

The second stage focuses on the transfer of registration logic for new organizations and services. This stage will not only involve smart contract development and environmental modifications but will also require updates to the Daemon and SDK. This stage presents a greater level of complexity, as it necessitates avoiding data storage collisions, ensuring high-quality and high-speed support for interaction with data from various blockchain networks, and guaranteeing uniformity of stored data across all networks.

Further actions will be determined based on a detailed design for the complete migration of the SingularityNET protocol Marketplace to the Cardano blockchain. We anticipate progress on this endeavor throughout 2024, aiming to leverage the robust decentralization, security, and strength of the Cardano network for the entire universe of SingularityNET AI agents.

Scalability and Usability Improvements, Smoothing the Path to a Decentralized AI Future

Improving the Onboarding Experience

Using an innovative high-tech product like the SingularityNET decentralized AI Platform involves numerous intricate areas and requires users to grasp various structures, principles, and mechanisms, whether already implemented, currently in development, or planned for the near future. We acknowledge the learning curve inherent to such solutions.

However, simplifying the process of understanding the Platform is a core priority. We have been actively working to make publishing, using, and integrating services on our Platform a streamlined experience. This comprehensive process involves multiple components and steps, and we are proud to say we have successfully implemented key improvements throughout 2023.

Specifically, we introduced a series of pre-configured boilerplates and templates to support developers further. These ready-to-use resources help streamline the integration process of AI services into several applications and systems, saving developers valuable time and effort.

Development of Text User Interface (TUI)

We made significant progress in developing a Text User Interface (TUI) written in PyTermTK to provide a more user-friendly developer experience on the Platform. This innovative interface is designed to streamline the onboarding process for AI developers, particularly those with limited system administration experience. By providing a more accessible and user-friendly entry point, the TUI aims to democratize access to the Platform’s powerful capabilities.

Improved Technical Documentation

We understand the critical role of clear and comprehensive technical documentation in user experience. This is why we have been actively revamping the Platform package, including codebase, documentation, and guidelines for both the Publisher Portal and Developer Portal.

Our collaborative approach involves working hand-in-hand with AI Developers during onboarding to pinpoint and address pain points directly. These improvements will enhance the clarity and user-friendliness of the documentation and serve as a dynamic knowledge base for the SingularityNET Platform Assistant, which is currently in development.

Streamlining the Onboarding Process and Publisher Experience

We are committed to providing service providers and onboarders with a seamless and efficient Platform experience. Here are some key improvements we have made to the Publisher Portal:

  • Test Network: Enhanced interaction with the test network to ensure crucial functionalities for service testing during the upload process to the Platform. This involved replacing the testnet and debugging interactions with the daemon.
  • Improved front-end: Actively addressed bugs and tested front-end functionalities to eliminate instances of incorrect portal behavior.
  • User Experience Focus: Removed specific friction points identified through user interactions, ensuring a smoother experience.
  • Comprehensive Documentation: Revising documentation and guidelines based on support experience and developer feedback, aiming for increased clarity and comprehensiveness.

Facilitated Service Development and Deployment Automation

We started implementing service wrapping automation and event-driven solutions for easy AI model deployment on the SingularityNET decentralized AI Platform. This means we can significantly simplify the service wrapping process, reduce costs, and scale AI models more efficiently.

Previously, service providers had to maintain their own or third-party cloud servers to run AI services. With the use of event-driven service architecture, instances will only be used when accessed by users. This eliminates the risk of incurring costs without actual usage of the AI service. We can now scale GPU models from zero to the limits set by the provider and back again.
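The event-driven behaviour described above can be sketched as a toy reconciliation loop. Real deployments rely on serverless GPU vendors or Kubernetes-based autoscalers rather than this logic, so treat it purely as an illustration of scale-to-zero:

```python
class ScaleToZeroAutoscaler:
    """Toy model of event-driven scaling: replicas exist only while
    requests are pending, up to a provider-defined limit (illustrative)."""

    def __init__(self, max_replicas: int):
        self.max_replicas = max_replicas  # upper bound set by the service provider
        self.replicas = 0                 # idle services consume no resources

    def reconcile(self, pending_requests: int) -> int:
        """Match replicas to demand: zero when idle, capped at the limit."""
        self.replicas = min(pending_requests, self.max_replicas)
        return self.replicas

scaler = ScaleToZeroAutoscaler(max_replicas=4)
scaler.reconcile(10)  # burst of requests: scale up to the cap of 4
scaler.reconcile(0)   # traffic stops: scale back to zero, so no idle cost
```

The key property is the last step: when no requests are pending, the replica count drops to zero, so the provider incurs no cost for an unused AI service.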

Onboarders can diversify their model hosting and communication practices to host AI models with serverless GPU vendors and leverage modern practices for operator-based models in Kubernetes and other AI cluster management tools.

The solution has been implemented for 15 AI services on the Platform for testing, and a detailed guideline for applying this approach is being prepared for release.

Improving Key Components: CLI, SDK, Daemon

The seamless onboarding and user experience of our decentralized AI Platform hinges on three key technical components: Daemon, CLI, and SDK. Each plays an essential role for:

  1. service providers deploying and managing their AI services;
  2. integration clients seamlessly embedding these services into their applications; and
  3. end users who use services for personal purposes without integration requirements.

In particular, the first stage of work included dividing the code base into separate repositories (for the CLI and SDK) and refactoring some components within the SDK to logically and physically separate those used in different application contexts, making the SDK more flexible. We then moved the SingularityNET smart contracts layer into separate Python packages to support up-to-date versions of all contracts. The result is a more advanced architecture that simplifies the development of wrappers over service calls.
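The "wrappers over service calls" mentioned above can be pictured as a thin layer handling cross-cutting concerns (retries, metering) around the underlying call. The decorator below is a generic illustration, not the actual SDK API; `service_call`, its parameters, and the sample `summarize` service are invented for this sketch.

```python
import functools
import time

def service_call(retries: int = 2, delay: float = 0.0):
    """Generic wrapper for a remote service call: retries transient
    failures and counts invocations (illustrative, not the SDK API)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            wrapper.calls += 1  # simple invocation metering
            for attempt in range(retries + 1):
                try:
                    return fn(*args, **kwargs)
                except ConnectionError:
                    if attempt == retries:
                        raise  # out of retries: surface the failure
                    time.sleep(delay)
        wrapper.calls = 0
        return wrapper
    return decorator

@service_call(retries=2)
def summarize(text: str) -> str:
    # A stand-in for an AI service invocation over gRPC/HTTP.
    return text[:10]
```

Keeping such concerns in one wrapper layer, rather than in each client, is the kind of separation the repository split is meant to enable.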

On the Daemon side, we are actively working on improvements to Infura interaction and polishing the user experience around providing training as a service. Near-term plans also include simplifying and automating Daemon setup and deployment, decreasing the number of configurable parameters required for setup by supplying defaults (custom values can still be configured but are not required for the most common setups).

Our dedicated development team ensures uninterrupted operation, consistent updates, modern architecture, and ongoing support. This meticulous approach guarantees uptime, security, performance, and the addition of new functionalities. The result is a simplified onboarding, an intuitive user experience, increased confidence in Platform reliability, unlocked innovation for service providers and clients, and broader accessibility for diverse users worldwide.

Progressed with Daemon, CLI, and SDK Transformation

We have started development to automate the deployment of Daemon modules (a core component of the interaction between an AI developer’s service and the Platform) and simplify the interaction with the SingularityNET CLI.

We are moving toward a global onboarding methodology and multi-platform tools and simplifying the interaction with these tools. We are also working on separating the CLI from the Python SDK while concurrently updating the JavaScript SDK.

To make the work of service providers more convenient, a new way for the Daemon to interact with the service via HTTP requests has been implemented. Previously, connections to the Daemon were limited to gRPC, JSON-RPC, and process (threaded interaction with the running provider service). Service providers running Generative AI nodes can now deploy their services on hosting based on serverless GPU computing. This functionality is currently being tested and debugged.
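As a rough illustration of calling a daemon over HTTP rather than gRPC, the sketch below assembles a JSON request; the endpoint path, header names, and payload fields are assumptions for illustration, not the Daemon's actual interface.

```python
import json
from urllib import request

# Hypothetical daemon endpoint -- the real path and port come from the
# provider's daemon configuration, not from this sketch.
DAEMON_URL = "http://localhost:7000/service/answer"

def build_daemon_request(question: str, payment_channel_id: int) -> request.Request:
    """Assemble an HTTP call to the daemon instead of using a gRPC stub."""
    body = json.dumps({"question": question}).encode()
    headers = {
        "Content-Type": "application/json",
        # Payment metadata the daemon would validate before forwarding the
        # call to the underlying AI service (header name is illustrative).
        "snet-payment-channel-id": str(payment_channel_id),
    }
    return request.Request(DAEMON_URL, data=body, headers=headers, method="POST")

req = build_daemon_request("What is AGI?", payment_channel_id=42)
# urllib.request.urlopen(req) would send the call; omitted here.
```

The appeal of the HTTP path is that serverless GPU hosts, which typically expose plain HTTP endpoints, can sit behind the daemon without a gRPC bridge.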

Also, new functionality was implemented for service providers to train models. In their interaction with the Daemon, this allows service providers to activate an additional model-training service based on a provided dataset. The functionality is being tested and improved to meet technical requirements.

Developing Zero-code AI Model Training

To make our new Internet of Knowledge functionalities as broadly accessible as possible, we have implemented new features for service providers to train and deploy custom AI models on our decentralized AI Platform, including zero-code training. Service providers can seamlessly activate an additional model-training service based on a provided data set as part of their interaction with the Platform.

To provide such innovative functionality, we leverage advanced architectures and optimization techniques. This ensures optimal utilization of resources, enabling rapid model training and deployment and ensuring scaling in inference.

This is why we will be introducing the concept of Training-On-Platform (TOP). As a pilot, we have equipped three highly sought-after, domain-focused implementations with this new capability:

  • Speech synthesis service with training support based on the provided unique voice sample;
  • Image generation service with training support based on the provided image collection;
  • Textual generative conversational AI service based on LLM fine-tuning using custom data.

This showcases the TOP feature and its advanced functionalities through top-tier, pre-trained AI services available in our Marketplace, and makes TOP accessible to diverse audiences, including developers with deep technical expertise and AI designers and customizers with less technical experience.

This approach not only expands the potential user base but also makes AI development more engaging and accessible, opening doors for both technical and non-technical users.
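The TOP activation flow can be summarized as a simple job lifecycle. The sketch below is a toy state machine with invented names, not the Platform's actual training API:

```python
from enum import Enum, auto

class TrainingState(Enum):
    SUBMITTED = auto()  # dataset received and validated
    TRAINING = auto()   # base model is being fine-tuned
    DEPLOYED = auto()   # custom model is live behind the service

class TrainingJob:
    """Toy lifecycle of a Training-On-Platform job (illustrative only)."""

    def __init__(self, dataset: list):
        if not dataset:
            raise ValueError("a non-empty dataset is required")
        self.dataset = dataset
        self.state = TrainingState.SUBMITTED

    def run(self) -> TrainingState:
        self.state = TrainingState.TRAINING
        # A real job would fine-tune the base model on self.dataset here,
        # e.g. a voice sample, an image collection, or LLM training text.
        self.state = TrainingState.DEPLOYED
        return self.state
```

From a zero-code user's perspective, only the dataset upload and the final deployed state are visible; the intermediate states are managed by the Platform.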

SingularityNET Token Bridge

The SingularityNET Token Bridge constitutes an important supporting technology service and an important focus area for our tech teams.

Facilitating the seamless and secure transfer of data and tokens between different blockchain networks underpins the very idea of a “decentralized” AI marketplace, promotes interoperability, and opens up new opportunities for users to access a broader range of network-specific features and capabilities.

As the crypto space continues to evolve, this bridging mechanism will contribute to the growth of our ecosystem projects and third-party partners, allowing them to build scalable cross-chain applications while upholding the spirit of decentralization.

Along these lines, our tech teams undertook a series of initiatives in 2023, notably:

  • The development and optimization of the interaction processes with select blockchain networks.
  • Support for a number of SingularityNET ecosystem tokens.
  • Implementation of non-ecosystem tokens.

The choice of which blockchains to support is a strategic decision, one that requires careful consideration of their differences and distinct benefits as well as more specific parameters such as data processing speed, the volume of data processed, project focus, user comfort, simplicity, convenience, security, and the technological solutions underlying each blockchain network.

At the time of this writing, the SingularityNET Bridge supports our native token AGIX, NuNet’s utility token NTX, Rejuve.AI’s utility token RJV, and Cogito’s governance token CGV. Specifically, it bridges the gaps between the Ethereum and Cardano networks and between the Ethereum and Binance Smart Chain networks.

Currently, separate front-end interfaces exist for Ethereum-Cardano and Ethereum-Binance Smart Chain conversions. An upcoming update will introduce a unified interface, replacing the two separate portals with a single, intuitive platform. This consolidation aims to significantly streamline the user experience and make cross-chain interactions a breeze, regardless of technical expertise. Expect easier navigation, clearer information, and an overall smoother transfer process.

The development team is not stopping there. We are actively working on adding support for BSC-Cardano conversions and optimizing the speed, security, and efficiency of token conversion processes.

This continuous expansion further bridges the gap between previously isolated ecosystems and creates a more interconnected and efficient decentralized AI ecosystem.

Driving Platform Utilization in 2024

Deep Funding

Deep Funding (DF) is a community-driven program that has been created to support the SingularityNET Foundation’s mission of driving the development of a decentralized, democratic, inclusive, beneficial Artificial General Intelligence. The main goal of Deep Funding is ‘To help the Decentralized AI Platform grow’.

As of February 2024, DF has funded 80 diverse projects across 3 full rounds and (recently) one test round, pledging a total of $2,869,608 in AGIX to projects elected by the Deep Funding community. This includes the development and usage of AI services on the decentralized AI Platform as well as promotional initiatives and tooling contributing to the SingularityNET Platform’s adoption, growth, and success.

Deep Funding is also taking a lead role in SingularityNET’s decentralization efforts. The community not only decides which projects get awarded but is also deeply involved in the decision-making and operational management of the Deep Funding program. With a fast-growing program, the total organization of DF consists of only 2 regular contracted FTEs and more than 20 community members in charge of important tasks such as Marketing, Milestone reviews, community engagement, etc.

In this section, we delve into the ongoing development of the DF platform, explore what motivated the introduction of SingularityNET Requests For Proposals (RFPs), highlight the importance of community building, and explain how you can get involved.

Deep Funding Platform Development

In mid-2022, Deep Funding started with a partnership with a ‘best of breed’ provider of a democratic ideation portal. While initial progress was very encouraging, the partner’s development pace slowed significantly, resulting in a widening gap between the desired feature set and the delivered functionality.

At an early stage, DF decided to create its own front end to the portal in order to make the program less dependent on a third-party tool. Development continued steadily until an ‘MVP’ stage of an in-house portal was within reach. In the June 2023 Town Hall, the community opted to run one more round (Round 3) on the existing platform while the development team continued to build a proprietary solution. This enabled the team to refine and further augment the platform with additional features.

Earlier this year, the new codebase was successfully deployed on deepfunding.ai, and awarded projects and teams have been seamlessly migrated. At the time of writing, the first test round on the new platform, with 15 whitelisted teams, has just been completed successfully. The feedback was very positive: no major issues were found, and the experience has led to greater confidence and a number of concrete improvements.

For a full overview of the current status and the roadmap of the DF portal, please read this blog.

The main existing highlights of the new Proposal Portal are:

  • A flexible but structured step-by-step way for proposers to submit their proposals, including (required) milestones, budget calculations, and team information;
  • The ability to invite team members and have them create their own profile on the platform;
  • Structured overviews of proposals and awarded projects, with sub-pages including all project details;
  • The start of an RFP (Request for Proposal) section and a process that will enable the community to ideate on and co-create RFPs;
  • Feedback options such as comments with hierarchical levels and a sophisticated ‘reviews and ratings’ section;
  • After each round, all data from awarded teams will be retained, kept up to date, and remain easily accessible.

Main things to add before Deep Funding Round 4 (DFR4):

  • Integrated notification system on new feedback, reactions, and other communication by mail and on the platform;
  • Scheduling options for all round-phases and visibility of the status, countdowns, etc. on the portal;
  • A number of smaller visual and structural improvements poised to make the overall experience better for proposers and the community.

Things we will continue to work on during and after DFR4:

  • Automated distribution and presentation of ‘progress surveys.’ These are short surveys sent out to the teams so the community can stay informed on their progress and signal situations where support is needed;
  • Backend workflow automations, such as automated contract creation and milestone approval process;
  • Building out the tools and processes to support (community-driven) RFPs;
  • A separate WordPress environment for presenting community information around Events such as Town Halls, News, Operational Circles, and more.

Deep Funding Request For Proposals (RFPs)

Deep Funding Round 3 saw the introduction of the “SNET RFPs” (SingularityNET Requests For Proposal) pool, reflecting our commitment to deepen our collaboration with the SingularityNET community, external partners, and the global open-source community.

The RFP format allows more control over and direction of outcomes than a generic pool does. This has many benefits:

  • The SingularityNET development teams can define specific desired features that align with the Platform strategy and complement the internal development process, including SNET technologies such as MeTTa, DAS, and AI-DSL;
  • The Deep Funding staff can define specifications for tooling that support the Deep Funding program and our main goals;
  • The community and Deep Funding staff can use the RFP process to develop targeted solutions, such as frameworks that allow different teams to make their work interoperable instead of competing;**
  • Partner organizations can use the RFP format to define conditions for specific solutions that will help their and our ecosystem, and support Deep Funding with grants, the engagement of their community, and new, functional services on and around the AI Platform.

** A good example is the ‘Content Knowledge Graph’ RFP that is being developed by several collaborating community teams. This system will enable multiple parties to create and manage content independently, creating a consistent and up-to-date repository that can be used for a wide range of educational and informational purposes. If successful, this will mitigate the risk of a scattered landscape of overlapping and, eventually, outdated partial data repositories.

Growth Strategy

Deep Funding’s impact extends beyond mere funding. We are actively creating a community of builders, a community where people ideate, build, educate, and support each other in the creation and utilization of AI services on and around the decentralized AI Platform.

We aim for this community to be inclusive and supportive, enabling everyone to contribute to our goals using their unique capabilities and experience. While the funding stream is and will remain a catalyst for development, we aim for Deep Funding to be more than ‘just’ a way to distribute grants to promising projects.

Theoretically, DF in a mature state should be able to thrive even without this funding source, based on an active and supportive community with a shared purpose and a wealth of information and supporting tools.

In alignment with this ambition, we are working together with the community on several strategies that we believe will bring significant further growth to the program:

Partner engagement: We are in conversation with some partners on the possibilities of co-funding some pools and/or RFPs. This will not only bring additional funding, but it will also open up our program to new communities and additional use cases for our technology.

Hackathons: We are working on plans to organize Deep Funding Hackathons. The projects coming out of such a hackathon could then be turned into a Deep Funding proposal to get additional funding for the next development phase.

Marketing efforts: Our new marketing circle, which started at the beginning of January, is full of ideas to increase awareness of Deep Funding, such as a Deep Funding newsletter, a ‘project of the week,’ better social media coverage, improved branding and visuals, rethinking Town Halls, and more.

Community.deepfunding.ai: As already mentioned in the roadmap section, we are starting a separate web frontend for better information on community-driven events such as Town Halls, circle reports and updates, general news, etc. Instead of prioritizing these components over platform features, we will now be able to work on these in parallel, driving both areas forward at a faster pace.

This community site will be developed in conjunction with the SingularityNET Ambassadors and Supervisory Council so that we can create a global, aggregated event calendar, a global news overview, etc.

Regional, community-driven Town Halls: We are planning Latin American, African, and hopefully also Asian Town Halls, bridging cultural differences and catering to local needs. We are exploring ways to make Town Halls more attractive and useful for participants, potentially focusing more on educational content and coaching practices, thus turning them into an onboarding ramp for proposers.

How to Get Involved?

We invite everyone to be a part of this journey! Whether you are a seasoned AI developer, a passionate researcher, or simply interested in contributing to advancing this initiative (governance, processes, marketing, etc.), Deep Funding offers opportunities to make a real difference.

Learn more about upcoming rounds, discover how anyone can participate, and explore the diverse projects currently shaping the future of the SingularityNET Platform at our Portal site deepfunding.ai, our Telegram channel, or our LinkedIn page.

SingularityNET Ecosystem Spinoffs and Incubating Initiatives

Rejuve.AI

Rejuve.AI is at the forefront of blending AI and blockchain technology to redefine longevity. Focused on harnessing the power of decentralized intelligence, Rejuve.AI employs advanced AI to decode the intricacies of aging, offering insights that pave the way for extended healthspan and well-being. Its mission is to democratize access to cutting-edge longevity research and treatments, empowering individuals to take control of their aging process through informed, data-driven decisions.

AI for Personalized Health Insights: Rejuve.AI utilizes its internally developed Bayesian Net (BayesExpert) and Generative Cooperative Network (GCN) models, which operate on SingularityNET’s decentralized AI Platform. This collaboration allows Rejuve.AI to leverage the vast capabilities of SingularityNET’s infrastructure, enhancing its AI-driven approaches to health and aging.

These AI systems are made possible through crowdsourced models that our GCN puts together in the best combinations. The resulting decentralized AI models are trained on expansive datasets, including electronic health records, genetic data, and real-time health tracking, to generate predictive models and offer unprecedented precision in personalized health recommendations tailored to a person’s biological drivers of aging.

Healthspan-Boosting Mobile App: Central to their platform is the Longevity App, a tool that integrates cutting-edge AI to provide users with a comprehensive understanding of their health and aging process. It uses predictive analytics to offer actionable insights, enabling users to make informed decisions that positively impact their healthspan.

Rejuve.AI seamlessly integrates blockchain technology into its ecosystem, offering several key benefits:

Empowered Data Ownership: Rejuve.AI employs blockchain technology for the secure and unalterable recording of health data, enhanced by our Data NFT (dNFT) framework. Individuals maintain sovereign control over their data, dictating access and usage, while our Product NFT (pNFT) system provides a transparent and verifiable record of contributions, ensuring clarity and trust in the longevity research ecosystem.

Decentralized Research Collaboration: Rejuve.AI leverages the decentralized nature of blockchain to enable secure collaboration between researchers and institutions, facilitating the sharing of data and insights without compromising individual privacy.

Tokenized Incentives: Rejuve.AI ensures data sovereignty and incentivizes users through tokenized rewards. Our ecosystem utilizes the RJV token, facilitating a transparent and fair exchange of value within our community. This token economy not only encourages secure health data sharing but also enables access to a range of health-related products and services at preferential rates, making longevity accessible to everyone.

Partner Network Access: Rejuve.AI is cultivating a network of longevity-focused businesses, giving individuals the power to utilize their RJV tokens for beneficial health products and services. This initiative not only makes longevity resources more accessible but also ensures that members can take informed, affordable steps toward enhanced well-being.

The Road Ahead:

Rejuve.AI is poised at the forefront of longevity research, moving forward with a commitment to harness AI’s potential responsibly and to empower individuals with control over their health data. With a keen eye on the ethical implications, Rejuve.AI is not just navigating the future; it’s shaping it. As they progress, their initiatives — from launching the Longevity App to democratizing data with their dNFT — promise to redefine our understanding of aging and open new horizons for a healthier tomorrow.

Jam Galaxy

Jam Galaxy’s mission to revolutionize the music industry transcends mere technological integration; it orchestrates a symphony of cutting-edge AI, blockchain, and low-latency audio streaming powered by the robust decentralized AI infrastructure of SingularityNET. This unique convergence addresses core challenges, empowers global collaboration, and unlocks a future where music thrives on seamless creation, production, and engagement.

AI Composition and Exploration:

AI Synthesizer: This innovative tool, hosted directly on the SingularityNET Platform, leverages transformer language models to translate textual descriptions into audio samples. Imagine conjuring a specific sonic atmosphere or instrument simply by describing it. This tool, drawing from self-supervised audio representation learning and sequential modeling, unleashes unprecedented creative control for musicians.

Digital Audio Workstation (DAW): The upcoming AI-powered DAW, integrated with the SingularityNET Platform, promises to streamline the entire music production process. Musicians can leverage AI assistance for arrangement, composition, and mixing, allowing them to focus on their artistic vision while the AI handles technical complexities.

Collaboration and Stem Exploration:

Music Demixing: This AI-powered song splitter, recently upgraded, unlocks new possibilities for collaboration and remixing. It seamlessly separates individual stems from songs, empowering musicians to isolate vocals, create backing tracks, or extract instrumentals for collaborative online jam sessions. This tool, hosted on the SingularityNET Platform, exemplifies the platform’s commitment to fostering meaningful collaboration across borders.

Decentralized and Fair Compensation:

Blockchain Integration: Jam Galaxy seamlessly integrates blockchain technology to ensure fair compensation and transparent transactions. SingularityNET’s decentralized nature empowers artists to retain control over their work and receive direct compensation for their creations, fostering a thriving and equitable music ecosystem.

Jamming the Future: Jam Galaxy’s vision, powered by SingularityNET’s AI and blockchain capabilities, goes beyond technological integration. It’s about building a vibrant community where artists and fans connect, co-create, and discover music seamlessly. By democratizing music production with AI tools and ensuring fair compensation through blockchain, Jam Galaxy offers a glimpse into the future of music — a future where technology empowers creativity and fosters a truly global musical experience.

The upcoming suite of advanced tools — including an AI-powered Digital Audio Workstation (DAW), AI synthesizers, drum generators, music loopers, and mixing/mastering tools — is set to be integrated into the backend of the SingularityNET Platform, promising to significantly streamline and enhance the music production process. Jam Galaxy’s dedication to improving musical sessions through innovative online jamming, bolstered by a robust infrastructure, is poised to provide a seamless and high-quality experience for both artists and fans.

These features represent a substantial advancement in integrating technology and creativity within the music industry. This approach emphasizes the platform’s focus on utilizing advanced AI to refine the processes of music creation, sharing, and consumption.

Mindplex

Mindplex is a SingularityNET media spin-off that uses AI and blockchain technologies to reinvent the media environment for a better future. Mindplex has three interrelated branches:

  1. Mindplex Media, a digital magazine/social media outlet that explores radical new future technologies and their profound impacts;
  2. An AI/blockchain technology project that builds AI/blockchain tools for the media industry, such as reputation systems and recommendation AIs, which other decentralized media/social media companies and businesses can license;
  3. A membership network (Mindplex Network) whose members collaborate to build a responsible media environment and contribute content while prioritizing healthy mental attention.

Mindplex’s ongoing initiatives align with SingularityNET’s objectives, and most of its proposed feature developments will be utilized by SingularityNET as follows.

Mindplex’s focus on AI and blockchain tools customized for the media/social media business complements SingularityNET’s expertise. For starters, Mindplex has begun developing an AI- and blockchain-based Reputation System, which will be integrated into the SingularityNET AI Marketplace as a decentralized, merit-based Content and Content Creator Reputation Tool. Furthermore, the non-liquid, soul-bound reputation token MPXR is intended to become a major governance token that can curb the issues of liquid governance tokens; at a later stage, other platforms and businesses can adopt it as one of their governance tokens. Mindplex also plans to develop Generative AI tools, Virtual AI characters, and a Collaborative Fiction Writing Platform (human-to-human and human-to-machine) with NFT components. These tools, along with others curated for media/social media businesses, all with AI and blockchain integrations at their core, will contribute substantially to SingularityNET’s AI Marketplace, driving more API calls and inter-community AI/blockchain development projects.

Mindplex’s community membership network encourages collaboration among members to build a better media environment. This collaborative approach can foster innovation and lead to the development of cutting-edge technologies that benefit both Mindplex and SingularityNET. A good example among upcoming Mindplex projects is Decentralized Impact Journalism, a form of ‘solutions-oriented journalism’. In a nutshell, this will focus on rigorous and compelling reporting on responses to technological and social problems. Mindplex and SingularityNET will seek out unexplored pockets of technology and their impact on the socioeconomic and political aspects of society, the environment, and the future of humanity. Through this approach, Mindplex will also amplify SingularityNET’s role in the creation of beneficial Artificial General Intelligence (BGI).

Mindplex is also poised to become the perfect Cross-Promotion platform via its ‘Community Project’ section (to be launched in Q4 of 2024) on its digital magazine and social media platforms. Primarily, this section will create room for community members to engage with the upcoming projects of SingularityNET and its affiliates while its decentralized Reputation System curates the content and conversation.

Domain-Oriented AI Metaservices With AI Training Capabilities

Scaled Image Generation Metaservice

At the current level of technological advancement, the fragmented landscape of image generation tools can be a hassle for users, who must maintain multiple accounts, navigate usage limitations, and contend with varying quality across tools. The Scaled Image Generation Metaservice emerges as a solution, streamlining the creation of custom images through an innovative infrastructure based on a simplified AI-DSL.

Imagine integrating plugins into an LLM like ChatGPT but extending this concept to diverse interactions between different LLMs, symbolic components, and other services. With our Metaservice, users can leverage the combined power of multiple tools without being constrained by separate registrations, usage limitations, or inconsistent output quality.

At its core, the Metaservice aggregates outputs from diverse generative models and can provide batched generation at scale. Users can preview and select the most suitable model based on their specific needs, eliminating the need to try multiple standalone services. The user interface caters to both technical and non-technical audiences, offering a traditional web interface, a Telegram bot, and programmable access through REST APIs.
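As a rough illustration of how a client might fan a single prompt out to several models and let the user pick from previews, the sketch below builds a batched request payload and selects the highest-rated model. The field names (`prompt`, `models`, `batch_size`) and the scoring flow are hypothetical; the real Metaservice API may differ:

```python
from dataclasses import dataclass, field

@dataclass
class GenerationRequest:
    """Hypothetical batched request to the image generation Metaservice."""
    prompt: str
    models: list = field(default_factory=list)  # model identifiers to fan out to
    batch_size: int = 4                         # images requested per model

    def to_payload(self):
        """Serialize to the JSON body a REST client would POST."""
        return {
            "prompt": self.prompt,
            "models": self.models,
            "batch_size": self.batch_size,
        }

def pick_model(preview_scores):
    """Return the model whose previews the user rated highest.

    `preview_scores` maps model id -> user rating of its preview batch."""
    return max(preview_scores, key=preview_scores.get)

request = GenerationRequest("a red fox in watercolor", ["model-a", "model-b"], batch_size=2)
payload = request.to_payload()
best = pick_model({"model-a": 4.1, "model-b": 4.7})
```

The same payload could be submitted through the web interface, the Telegram bot, or a REST call; only the transport differs.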

The benefits of our Scaled Image Generation Metaservice extend beyond simplifying the image creation process. They include promoting efficiency by saving users time and resources, and offering a valuable solution for personal projects and professional presentations alike. Furthermore, the Metaservice fosters a dynamic ecosystem where new models can be easily integrated, and existing ones are better utilized. This creates a community of creators and users who benefit from collective advancements in image generation technologies.

The Metaservice has a modular architecture that combines image generation, stylization, inpainting, and other tools behind an advanced UI. The solution not only simplifies the process of generating custom images through a unified interface but also embodies the principles of collaboration, efficiency, and accessibility, with the underlying goal of enhancing the utility and usability of generative AI technologies for a wide range of applications.

Controllable Speech Synthesis Metaservice

The Speech Synthesis Metaservice offers a range of advanced features tailored to meet diverse user needs. One of its notable capabilities is efficiently onboarding new speakers for model training, facilitating rapid integration and adaptability. Utilizing cutting-edge algorithms and machine learning techniques, the metaservice swiftly adjusts to individual vocal characteristics, ensuring seamless integration.

Additionally, the metaservice offers rapid fine-tuning capabilities, enabling a model to be customized with just 20 seconds of labeled speech data in roughly five minutes. Leveraging a Large Base model trained on a curated dataset of 300 clearly recorded speakers and specialized Speaker Embedding models, the system achieves exceptional performance in speaker adaptation. Notably, to ensure optimal speaker adaptation, the base model is frozen and only the Speaker Embedding layer is further trained. This allows fine-tuning specific to individual speakers while preserving the foundational aspects of the model’s architecture.
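The frozen-base adaptation idea can be shown with a toy scalar model: the base weight stays fixed while gradient descent updates only the speaker embedding. This is an illustrative sketch of the principle, not the Metaservice’s actual training code:

```python
def adapt_speaker(base_weight, embedding, samples, lr=0.1, steps=50):
    """Toy speaker adaptation on the model  prediction = base_weight * x + embedding.

    The base weight is frozen (never updated); only the scalar speaker
    embedding is trained by gradient descent on mean squared error."""
    for _ in range(steps):
        grad = 0.0
        for x, y in samples:
            pred = base_weight * x + embedding
            grad += 2 * (pred - y)          # d(loss)/d(embedding) for this sample
        embedding -= lr * grad / len(samples)
    return base_weight, embedding           # base_weight returned unchanged

# Samples generated by y = 2x + 0.5; base weight 2.0 is already correct,
# so training should recover the speaker offset 0.5 in the embedding alone.
frozen, learned = adapt_speaker(2.0, 0.0, [(1, 2.5), (2, 4.5)])
```

In a real network the “embedding” is a trainable vector layer and the frozen part is the rest of the model’s parameters, but the division of labor is the same.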

User control is a central focus, empowering users with extensive customization options for speech synthesis. Users can fine-tune emotional inflections, pacing, and intonation to achieve precise outcomes.

Prioritizing user experience, the metaservice provides intuitive interfaces and seamless integration options, whether accessed through web interfaces or programmable APIs.

Text Generation Training Metaservice

Nowadays, text generation models are becoming more and more popular and affordable. The capabilities of zero-shot learning and prompt engineering allow application developers to solve many problems with off-the-shelf language models. However, inherent biases or a lack of personalization can limit their effectiveness in certain scenarios. Our Text Generation Training Metaservice addresses these limitations by enabling end users to fine-tune pre-trained language models on their own data, personalizing them for specialized tasks.

This Metaservice will offer users a wide range of available language models on the one hand and the possibility of customizing and personalizing these models on the other. Users will be able to upload their own data to the Metaservice and fine-tune any suitable model of their choice to achieve personalization, goal-oriented skills, and any specific mixture of target biases, custom features, or skills.

Recognizing that a user’s own data may not always suffice to obtain the best quality and functionality, the Metaservice will also provide the opportunity to augment the user’s data with data collected in our knowledge bases. This will allow the user to teach the model the tasks presented in the user’s dataset without compromising the quality of standard tasks in the domain or the model’s generalization ability.

Sometimes user data, especially data related to the bot’s identity, may contradict the data provided in our augmentation datasets. To solve this problem, a contradiction detection module will be implemented in the service. This module will let users identify samples in the augmentation dataset that contradict their own dataset and remove those samples. As a result, enlarging the dataset will not compromise the model’s ability to stay coherent and consistent with the user’s intent.
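A minimal sketch of such a filter is shown below. It uses an exact answer mismatch on identical prompts as a stand-in for the NLI-style contradiction model the service would actually use, and the field names (`prompt`, `answer`) are hypothetical:

```python
def filter_contradictions(user_data, augmentation_data):
    """Split augmentation samples into (kept, dropped), dropping any sample
    whose answer conflicts with the user's answer for the same prompt.

    A real contradiction-detection module would compare meanings with an
    NLI model; here a literal answer mismatch stands in for that check."""
    user_answers = {ex["prompt"]: ex["answer"] for ex in user_data}
    kept, dropped = [], []
    for ex in augmentation_data:
        known = user_answers.get(ex["prompt"])
        if known is not None and known != ex["answer"]:
            dropped.append(ex)   # contradicts the user's identity data
        else:
            kept.append(ex)
    return kept, dropped

user = [{"prompt": "What is your name?", "answer": "Ava"}]
augmentation = [
    {"prompt": "What is your name?", "answer": "ChatBot"},  # conflicts with "Ava"
    {"prompt": "What is 2 + 2?", "answer": "4"},            # no conflict
]
kept, dropped = filter_contradictions(user, augmentation)
```

Only the non-conflicting augmentation samples would then be merged into the fine-tuning dataset, so the model keeps the user-defined identity intact.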

The Text Generation Metaservice will allow users not only to try and use various state-of-the-art language models but also to build their own set of the best language models, trained to serve individual needs, converse in their preferred style, or assist with tasks tailored to their personal or commercial requirements at scale. Combined with the power of the Internet of Knowledge introduced by the SingularityNET Platform, the Text Generation Training Metaservice opens an avenue to exploit the synergy between Knowledge Nodes and generative AI models: models are retrained to use the knowledge more efficiently and contribute back to the knowledge acquisition pipelines, producing better training datasets for subsequent retraining cycles and making the whole process simple.

Conclusion

The Platform is quickly entering a new phase of its existence. The team is shipping many improvements that will make onboarding and hosting services much easier, as well as simplifying service integration into applications, billing, and more.

At the same time, the Platform is scaling significantly in all dimensions. The team is building an innovative infrastructure to serve as a ‘Knowledge Layer’, with tools for deep integration of LLMs, Knowledge Graphs, and other data repositories, making them all interoperable by means of our core MeTTa-based integration layer.

Perhaps even more importantly, this deep integration of MeTTa is preparing the decentralized AI Platform for seamless integration with the OpenCog Hyperon AGI Framework. Coupled with fundamentally decentralized infrastructure from our partners NuNet, HyperCycle, and ICP, this will position the Platform as THE future distribution channel for truly decentralized Beneficial Artificial General Intelligence.

About SingularityNET

SingularityNET is a decentralized Platform and Marketplace for Artificial Intelligence (AI) services founded by Dr. Ben Goertzel with the mission of creating a decentralized, democratic, inclusive, and beneficial Artificial General Intelligence (AGI).

  • Our Platform, where anyone can develop, share, and monetize AI algorithms, models, and data.
  • OpenCog Hyperon, our premier neural-symbolic AGI Framework, will be a core service for the next wave of AI innovation.
  • Our Ecosystem, developing advanced AI solutions across market verticals to revolutionize industries.

Stay Up to Date With the Latest News, Follow Us on:
