Artificial Intelligence & Blockchain: Innovations and Applications in Web3

Chris Smalley
11 min read · Sep 6, 2024

--

Artificial intelligence has risen to prominence as a significant technological force in the 21st century. The technology has steadily evolved into a concrete reality with tangible impacts across diverse sectors of society. AI’s journey has been marked by periods of progress and stagnation; most recently, however, the technology has seen rapid growth in capabilities, most notably with the release of large language model-based chatbots like ChatGPT. This current wave of AI development is characterized by a potent combination of machine learning and deep learning, propelling the technology toward unprecedented possibilities.

AI’s expansion can be attributed to the surge of data in the world, as the two are fundamentally intertwined. AI systems, particularly those based on machine learning and deep learning techniques, require large volumes of data to learn, improve, and deliver accurate results. In the 21st century, society’s production of data has truly exploded: more data is created per hour today than in an entire year just two decades ago[1]. New applications that generate still more data, such as sensors, metaverses, holograms, health tracking, and autonomous vehicles, are being developed constantly. The size and intricacy of the datasets these applications produce challenge traditional analytical methods, which is where AI will play a vital role in helping us derive insights from the data. Furthermore, the availability of large datasets improves AI systems’ precision and intelligence, creating a beneficial feedback loop: as our production of data grows exponentially, we will need AI’s assistance to analyze it, and that data in turn makes AI more capable. For a sense of scale, by the time you finish looking at this chart, 500 hours of video will have been uploaded to YouTube[2].

We are already starting to see a breakthrough in real-world value with the release of ChatGPT. The large language model (LLM) has been so popular because of its ease of use and the fact that users don’t need training or experience to benefit from it. In part, this is why mass adoption has been unprecedented and news outlets around the world have turned their attention to AI. Notably, ChatGPT has advanced broader AI adoption because it is a platform where consumers can easily interact with the technology and begin to understand how LLMs work, removing the sensationalized terror that often accompanies AI developments. For reference, it took ChatGPT just five days to reach one million users and three months to reach one billion cumulative visits, an adoption rate 3x TikTok’s and 10x Instagram’s[3]. To achieve this feat, OpenAI, the creator of ChatGPT, fine-tuned a GPT-3-family model to produce a human conversational tone. The engine underlying ChatGPT, Generative Pre-Trained Transformer 3 (GPT-3), was trained at a then-unprecedented scale of 175 billion parameters, meaning its responses are generated from 175 billion learned weights. Today, customers who pay for the premium ChatGPT Plus service receive access to GPT-4, which uses even more parameters, although OpenAI does not disclose the exact number. Furthermore, OpenAI’s roughly 375 employees conceal the fact that the company had to hire 1,000+ remote contractors to label data for training the underlying engines. The immense level of training that has gone into the foundational models has led to thousands of use cases, hence the unmatched level of adoption. The models can be used quite effectively for virtual assistance, customer support, content generation, tutoring, coding, creative writing, travel planning, and more. Additionally, they are powerful for data extraction tasks like summarizing text and enhancing web search.
These thousands of use cases showcase the possibilities of large language models alone; however, artificial intelligence can unlock even more when combined with the financial and technological primitives of blockchain.

The intersection of AI and blockchain will create new opportunities and enhance existing applications. AI will simulate intelligent behavior to solve complex problems, while blockchain offers a secure and decentralized method of conducting commerce and transmitting data. Both technologies can bring trust and transparency: blockchain offers an immutable and tamper-proof ledger that ensures data is reliable, accurate, and secure, while AI can provide an audit trail of its own decisions, further enhancing transparency and accountability. Ultimately, we believe that AI and blockchain technology will be an area of complementary growth in the future.

Data Infrastructure

As the crypto industry continues to grow, there will be a greater need to properly structure the data coming from blockchain networks, which are producing oceans of new information. This information will be critical for both humans and artificial intelligence to analyze and interpret. One company that we believe is at the forefront of building out this data infrastructure for the future is The Graph, a decentralized protocol for indexing and querying blockchain data. The Graph makes it possible to obtain information from blockchain networks, which is fundamentally valuable because blockchain properties like finality, chain reorganizations, and uncle blocks make the data complicated to index. Just as Google in the early days of the web organized information so that everyday users didn’t need to know a URL but could instead search through Google’s indexed content to find the website they needed, The Graph looks to solve the analogous discovery problem for web3. It does so in an open and decentralized manner by combining innovative technology with powerful incentive structures to index web3 data. We believe The Graph will serve as a fundamental building block for the world of AI in web3. For reference, The Graph Network can be seen below[4].

Furthermore, the protocol has incorporated AI for automation through two tools, AutoAgora and the Allocation Optimizer, which help Indexers, the node operators who decide which subgraphs to index, improve their protocol performance and boost revenue. At their core, these AI tools help The Graph serve queries to its users. In the near future, The Graph will be able to use AI and LLMs to access and summarize its vast amount of information, allowing anyone to query it intuitively. The protocol can also integrate AI to assist with technical issues around indexing or query languages. Furthermore, The Graph is uniquely positioned to train AI models because it can provide a wealth of data that is verifiable and accurate, unlike much of the data used to train LLMs like ChatGPT. For more on these possibilities, The Graph’s Core Devs discuss AI & Crypto here.
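To make the querying model concrete, the sketch below shows the shape of a GraphQL request a client might send to a subgraph and how the standard response envelope is unpacked. The entity name (`tokens`) and its fields are illustrative assumptions, not the schema of any real subgraph, and a canned response stands in for what a Graph Node would return over HTTP.

```python
import json

# Hypothetical subgraph query. The entity name ("tokens") and its fields
# are illustrative assumptions, not a real subgraph's schema.
QUERY = """
{
  tokens(first: 3, orderBy: volume, orderDirection: desc) {
    id
    symbol
    volume
  }
}
"""

def build_graphql_payload(query: str) -> str:
    """Wrap a GraphQL query in the JSON body a subgraph endpoint expects."""
    return json.dumps({"query": query})

def extract_entities(response_body: str, entity: str) -> list:
    """Pull an entity list out of the standard GraphQL response envelope."""
    return json.loads(response_body)["data"][entity]

# A canned response in the standard GraphQL envelope, standing in for a
# live HTTP reply from a Graph Node.
canned = json.dumps({"data": {"tokens": [
    {"id": "0xabc", "symbol": "GRT", "volume": "1200"},
    {"id": "0xdef", "symbol": "ETH", "volume": "900"},
]}})

top = extract_entities(canned, "tokens")[0]
print(top["symbol"])  # GRT
```

The key point is that consumers never touch raw chain data: they express what they need declaratively, and the network of Indexers handles the heavy lifting of extraction and ordering.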

Another company building out the necessary data infrastructure for web3 is PYOR (Power Your Own Research). PYOR aims to bring auditable, institutional-grade infrastructure to investment decision-making, recognizing the significant need for robust data infrastructure to facilitate onboarding institutional investment in digital assets. The company recently introduced its Terminal product, a plug-and-play data terminal that enables institutions to access and interact with blockchain data and create private queries through a customized dashboard. We believe companies building out the data infrastructure of web3 will unlock significant value in the future and will provide the necessary foundation for AI systems.

Visual Media

One of the key challenges of AI-crypto projects is the computational limitations of blockchain technology. Blockchains were designed to securely and immutably record transactions in a decentralized manner, which makes it difficult to run AI inference on-chain. Alethea AI is one project that circumvents this limitation by performing AI compute off-chain and then interacting with smart contracts on-chain. This allows Alethea AI to be a pioneer in creating intelligent NFTs (iNFTs), or interactive AI characters, broken down below.

Source: Alethea AI

As seen above, the process of creating an iNFT involves fusing a compatible NFT with a Personality Pod NFT. Personality Pods are ERC-721 NFTs created by Alethea AI that represent the intelligence and personality of the iNFT; fusing locks the Personality Pod into the smart contract while leaving the underlying NFT untouched. Users can lock up amounts of the Alethea token (ALI) to raise the Personality Pod’s intelligence level, which unlocks more actions or services the AI can perform. This intelligence comes from Alethea AI’s AI Engine, a collection of proprietary AI models providing capabilities such as real-time lip-sync for avatars with a neutral facial expression, audio speech recognition, comprehension, speech synthesis and voice generation, and intelligent responses.
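The lock-to-level mechanic can be sketched as follows. Alethea AI does not publish exact thresholds, so the ALI amounts and the number of tiers here are assumptions for illustration only, not the project’s actual contract logic.

```python
# Hypothetical tier thresholds: the ALI amounts and level count are
# assumptions, not Alethea AI's published parameters.
INTELLIGENCE_TIERS = [(0, 1), (2_000, 2), (10_000, 3), (50_000, 4)]

class PersonalityPod:
    """Toy model of a Personality Pod whose intelligence level rises as
    more ALI is locked against it (a sketch, not the real contract)."""

    def __init__(self) -> None:
        self.locked_ali = 0

    def lock(self, amount: int) -> int:
        """Lock additional ALI and return the resulting intelligence level."""
        if amount <= 0:
            raise ValueError("must lock a positive amount of ALI")
        self.locked_ali += amount
        return self.level

    @property
    def level(self) -> int:
        # Highest tier whose threshold the cumulative locked ALI clears.
        level = 0
        for threshold, tier in INTELLIGENCE_TIERS:
            if self.locked_ali >= threshold:
                level = tier
        return level

pod = PersonalityPod()
print(pod.lock(2_500))  # 2
print(pod.lock(9_000))  # 3 (11,500 ALI locked in total)
```

Because the level is a pure function of cumulative locked tokens, the same check can run cheaply on-chain while the AI Engine’s heavy inference stays off-chain, mirroring the division of labor described above.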

Alethea AI’s technology will lead the development of synthetic media, as it allows users to create different intelligent avatars, have verifiable ownership through the NFT, and then earn from the NFT’s output. For example, users could train their iNFT by engaging in the Intelligent Debates product: the more users engage in the debates, the better the iNFT’s responses will be. In turn, users could then sell the content that the iNFT produces. Currently, the more popular iNFTs are selling for over 50 ETH; Hercules, for example, sold for 63 ETH. Additionally, because training an iNFT takes work, Alethea AI has created incentives to compensate users with ALI credits for participating in staking events that train the AI models. The training dynamic is especially important because Alethea uses LLMs with few-shot capabilities, meaning that once the models have been trained, they require only a small amount of data to generate consistent and coherent language outputs. Alethea AI’s network effect is considerable because it creates a framework that offers value to every part of the ecosystem. We believe the network effect[5] will be mutually beneficial for all stakeholders, as opposed to accruing to one centralized entity, and will lead to new possibilities in synthetic media.

Verification and Authenticity

As AI becomes increasingly integrated into our lives, issues surrounding identification, cybersecurity, and authenticity have become more prominent. AI systems have now evolved to mimic human behavior remarkably well, and progress on this front won’t slow down anytime soon. For this reason, we will need innovative ways to provide identification and authenticity to ensure that we are interacting with whom we intend. Civic is one company at the forefront of this development, building trust into web3 with on-chain identity. The Civic Pass provides Proof of Personhood (PoP), which establishes a user’s humanness and uniqueness. The Civic Pass uses video selfies to determine whether a user is a human or a bot and can verify that one user maps to one wallet (enabling more effective marketing and airdrops). Users can choose which details they would like to share with an application and keep the rest private[6].
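The one-person-one-wallet property can be sketched as a simple two-way registry. This is a toy model, not Civic’s actual design: it assumes an off-chain liveness check (such as the video selfie) has already produced a stable, privacy-preserving person identifier.

```python
class PersonhoodRegistry:
    """Toy sketch of a one-person-one-wallet binding. Assumes an off-chain
    liveness check has produced a stable person identifier; this is an
    illustration, not Civic's actual implementation."""

    def __init__(self) -> None:
        self._person_to_wallet: dict = {}
        self._wallet_to_person: dict = {}

    def issue_pass(self, person_id: str, wallet: str) -> bool:
        """Bind a verified person to a wallet; reject duplicates of either."""
        if person_id in self._person_to_wallet or wallet in self._wallet_to_person:
            return False
        self._person_to_wallet[person_id] = wallet
        self._wallet_to_person[wallet] = person_id
        return True

    def is_verified(self, wallet: str) -> bool:
        return wallet in self._wallet_to_person

registry = PersonhoodRegistry()
print(registry.issue_pass("person-1", "0xA11CE"))  # True
print(registry.issue_pass("person-1", "0xB0B"))    # False: person already bound
print(registry.issue_pass("person-2", "0xA11CE"))  # False: wallet already bound
```

The double index is the whole trick: a Sybil attacker cannot farm an airdrop across many wallets, and a wallet cannot be re-verified under a second identity.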

Civic presents a necessary solution given the fact that AI chatbots can now pass for humans in online interactions. Ensuring proper identification mechanisms is crucial to prevent potential abuses and to maintain trust in AI-enabled systems.

Furthermore, LLMs like ChatGPT have enabled advanced forms of phishing and ransomware attacks. Historically, phishing emails were easy to spot because they incorporated typical red flags: strange greetings, misspelled names, poor grammar, and high-priority requests. The advent of ChatGPT has removed those red flags; attackers can simply use ChatGPT to refine their email until it is almost indistinguishable from a legitimate one. For this reason, projects like Token are providing solutions to prevent phishing and ransomware attacks. With its MFA solution, the Token Ring, users have a wearable authentication device that removes the need for passwords to prove credentials. We believe that Token provides a necessary defense against a new wave of AI-enhanced attacks.
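The passwordless idea reduces to a challenge-response flow: the server issues a fresh nonce, the device signs it, and no phishable secret ever crosses the wire. The sketch below is generic, not Token’s actual protocol, and real wearables typically use public-key credentials (FIDO-style) rather than the shared-secret HMAC used here to keep the example self-contained.

```python
import hashlib
import hmac
import secrets

# Generic challenge-response sketch of passwordless authentication.
# Not Token's actual protocol; real devices typically use public-key
# credentials, but a shared-secret HMAC keeps this example self-contained.

def issue_challenge() -> bytes:
    """Server issues a fresh random nonce for each login attempt."""
    return secrets.token_bytes(32)

def device_sign(device_key: bytes, challenge: bytes) -> bytes:
    """The wearable signs the nonce; no password ever leaves the device."""
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def server_verify(device_key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = secrets.token_bytes(32)
challenge = issue_challenge()
print(server_verify(key, challenge, device_sign(key, challenge)))  # True
```

Because each challenge is single-use, a perfectly worded phishing email has nothing to steal: there is no password to type, and a captured response is useless against the next nonce.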

Another crucial related area is verifying the authenticity of products both online and in the physical world. AI can assist in creating high-quality counterfeit products by analyzing and replicating the designs, textures, and other characteristics of real products, and machine learning models trained on datasets of real products can improve the quality of fakes over time. This dynamic has created a pressing need for product verification and provenance. Everledger is leading in this area by providing technology solutions that increase transparency in global supply chains, leveraging blockchain and IoT to offer supply chain transparency, object traceability, trustworthiness of claims, and useful digital identities. For example, Everledger can work with fashion brands or diamond companies to offer transparency into their sourcing practices, which builds trust with customers, who can know that the product they are buying is verified and, where the company makes such claims, humanely and sustainably made. Therefore, just as AI can enhance the ability to create counterfeits, blockchain can defend against this attack vector by proving provenance. Interestingly, this could provide a moat for established and admired brands, as they will be able to protect against some of the counterfeit supply. An overview of the diamond supply chain and inputs to Everledger’s blockchain can be seen below[7].
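The provenance idea can be sketched as a hash-linked event log per object: each supply-chain event commits to the hash of the one before it, so any later tampering breaks the chain. Field names and structure here are illustrative assumptions, not Everledger’s actual data model.

```python
import hashlib
import json

def _digest(payload: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON encoding."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

class ProvenanceChain:
    """Toy hash-linked event log for one physical object (e.g. a diamond).
    An illustration of the technique, not Everledger's data model."""

    GENESIS = "0" * 64

    def __init__(self, object_id: str) -> None:
        self.object_id = object_id
        self.events: list = []

    def append(self, event: dict) -> None:
        prev = self.events[-1]["hash"] if self.events else self.GENESIS
        body = {"object_id": self.object_id, "event": event, "prev": prev}
        self.events.append({**body, "hash": _digest(body)})

    def verify(self) -> bool:
        """Recompute every link; any tampered event breaks the chain."""
        prev = self.GENESIS
        for entry in self.events:
            body = {k: entry[k] for k in ("object_id", "event", "prev")}
            if entry["prev"] != prev or entry["hash"] != _digest(body):
                return False
            prev = entry["hash"]
        return True

chain = ProvenanceChain("diamond-001")
chain.append({"stage": "mined", "site": "Botswana"})
chain.append({"stage": "cut", "workshop": "Antwerp"})
print(chain.verify())  # True
chain.events[0]["event"]["site"] = "unknown"
print(chain.verify())  # False
```

A counterfeiter can copy a product’s look, but not a history whose every link is anchored in an immutable ledger: that asymmetry is the moat.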

Marketplaces

Another industry that stands to benefit from artificial intelligence, but is not discussed as often as the above, is marketplaces. Marketplaces will be improved by the increased personalization that machine learning algorithms enable. AI will be able to analyze a user’s behavior to generate personalized recommendations and improve search functionality, understanding a user’s intent and contextual information to upgrade search results and the discovery process. Talent marketplaces are uniquely positioned to benefit from these possibilities because of the historical complexities involved in hiring and recruiting. AI will be able to significantly improve the matching process between candidates and job opportunities: by analyzing a candidate’s skills, experience, and preferences, as well as a job’s requirements and benefits, AI can help identify the most suitable matches. This not only improves the chances of a successful hire but also reduces the time and effort required in the job search and recruitment process. Furthermore, AI can use historical data to predict future trends such as the demand for certain skills or roles, salary trends, and the likelihood of a candidate accepting a job offer, helping employers and job seekers make more informed decisions.

On the applicant front, AI can tailor job recommendations based on a candidate’s career aspirations, or it can suggest personalized training and development opportunities to help candidates improve their employability. Employers, meanwhile, can use AI to automatically screen applicants, saving recruiters significant time and effort. This would also reduce bias, because credibly neutral AI models would select candidates based on qualifications, skills, and experience rather than age, gender, or race.
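A deliberately simple version of the matching step can be sketched with a skill-overlap score. This toy model (Jaccard similarity over skill sets) is our illustration of the principle, not the actual algorithm of any marketplace discussed here; a production system would weigh experience, preferences, and compensation as well.

```python
def match_score(candidate_skills: set, job_requirements: set) -> float:
    """Jaccard overlap between a candidate's skills and a job's requirements.
    A toy model for illustration, not any marketplace's real algorithm."""
    if not candidate_skills or not job_requirements:
        return 0.0
    overlap = candidate_skills & job_requirements
    return len(overlap) / len(candidate_skills | job_requirements)

def rank_jobs(candidate_skills: set, jobs: dict) -> list:
    """Return job ids ordered from best to worst match."""
    return sorted(jobs, key=lambda j: match_score(candidate_skills, jobs[j]),
                  reverse=True)

# Hypothetical postings and candidate profile.
jobs = {
    "backend-eng": {"python", "sql", "docker"},
    "ml-eng": {"python", "pytorch", "sql", "statistics"},
    "designer": {"figma", "illustration"},
}
skills = {"python", "sql", "statistics"}
print(rank_jobs(skills, jobs))  # ['ml-eng', 'backend-eng', 'designer']
```

Note that the score depends only on skills and requirements: demographic attributes never enter the function, which is the mechanical sense in which such screening can be credibly neutral.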
At Future Perfect Ventures, we are excited to support companies like Braintrust and Andela that are innovating in the talent marketplace and reducing bias in the decision-making process.

Through blockchain technology, Braintrust is able to build out a decentralized talent network and incorporate novel crypto incentives. Users can earn the network’s native token (BTRST) by introducing and onboarding clients (demand) and talent (supply) to the network. Clients can also use BTRST to enhance their job postings, and talent can use BTRST to improve their proposals with employers or take courses to level up their skills. Also, because it is a decentralized network, price discovery between employer and candidate is much more streamlined. This efficiency allows talent to find preferred jobs quickly while employers can cut their recruiting costs by up to 70%[8]. Similarly, Andela is revolutionizing the developer talent market in Africa by being the go-to source for developers both in Africa and around the world. Andela uses AI to pair candidates in its network with job postings by member employers worldwide, analyzing data including applicants’ technical skills, training and experience, workplace compatibility, and other variables that can help foster productive teams. This differentiation has given Andela one of the industry’s highest success rates at 96%, meaning new hires stay with their employer for at least a year and a half. As seen with Braintrust and Andela, we believe machine learning will improve talent marketplaces and allow for truly global interaction between employers and candidates.

[1] https://www.seagate.com/files/www-content/our-story/rethink-data/files/Rethink_Data_Report_2020.pdf

[2] Domo

[3] BofA Global Research “Me, Myself and AI — Artificial Intelligence Primer”

[4] Thegraph.com

[5] Alethea.ai

[6] Civic.com

[7] Everledger.io

[8] https://www.usebraintrust.com/whitepaper


Chris Smalley

CIO 1881 Capital Partners | MBA Wharton | Former Banker