Atom News | HydroX AI: Building a One-Stop Platform for LLM Security and Privacy

Atom Capital
12 min readNov 6, 2023

--

Investment Update

HydroX AI, a startup in the field of LLM security, recently announced the completion of a $4 million angel round of funding led by Vitalbridge Capital, with participation from Atom Capital and Miracleplus. Founded in February 2023 by world-class experts and hackers deeply rooted in the security industry in Silicon Valley, HydroX AI is committed to building an all-in-one platform for AI security and privacy protection in the era of LLMs.

The advent of LLMs has brought about disruptive innovations, along with new risks and challenges, which have become obstacles to the practical implementation of AI. At the same time, this era presents a significant opportunity for the rise of a new generation of security companies. We believe that the era of LLMs has the potential to give rise to new giants in AI security, and HydroX AI is a promising startup in this regard.

We recently conducted an in-depth interview with HydroX AI CEO ZL to discuss the opportunities he sees in the field of LLM security, his thoughts on LLM security and privacy protection, and to share our investment rationale for HydroX AI.

Guest Intro

ZL: Founder of HydroX AI, with extensive experience in security and infrastructure. Former Global Head of Privacy and Data Protection at ByteDance Group and former Head of Privacy Infrastructure at Facebook.

01. LLM Security — A New Blue Ocean in the Security Field

Atom: Could you introduce your background and and that of your team? Also, please share what led you to decide to start this venture?

ZL: Let me start by introducing myself and our team briefly. I’ve been in the security industry for over a decade, primarily working in the areas of security and infrastructure. My previous role was as the Global Head of Privacy and Data Protection at ByteDance Group, where I was responsible for maintaining global privacy standards and data protection policies. Prior to that, I served as the Head of Privacy Infrastructure at Facebook, where my focus was on building and optimizing privacy-related infrastructure to provide a secure and reliable online experience for hundreds of millions of users. Through my experience at these two leading tech companies, I’ve accumulated best practices in security, privacy, and data protection, enabling me to deeply understand and address various challenges in the realms of security and infrastructure.

Our team members are all experts with over 15 years of experience, hailing from diverse backgrounds in security, AI, engineering, and sales. We are based in Silicon Valley and have a strong focus on the U.S. market. The advent of the LLM era has indeed brought us many exciting opportunities, especially in the field of security. Currently, LLMs are still in a very early stage, with most of the focus and energy directed towards their performance and capabilities. Attention to LLM security is relatively weak, which can result in new security issues such as novel attacks, risks, privacy breaches, and data leaks. While industry experts have started to raise awareness about this, the exploration of this field is just beginning.

We believe that addressing security concerns in big models is crucial. The more powerful the technology, the more essential it is to have appropriate security safeguards. Otherwise, the strength of the technology poses greater risks. Consequently, providing security solutions for LLMs is one of the most urgent needs in the AI field. However, due to the industry being in its infancy, there are no mature solutions available in the market, making it an almost blank blue ocean market.

Our team has decided to immerse ourselves in this space — our years of cross-disciplinary experience in AI, security, and engineering align perfectly with the challenging new domain of LLM security. Simultaneously, entering the market early offers us a significant first-mover advantage. We have the opportunity to collaborate with key industry players (such as leading big model manufacturers) to define many yet-undefined concepts, participate in and lead the development of related security standards, and potentially shape the future of this industry.For me, this is a challenging and enjoyable endeavor, a core reason that attracted me to entrepreneurship.

02. Characteristics, Opportunities, and Challenges of Security in the Era of LLMs

Atom: What do you think are the new characteristics, opportunities, and challenges in security during the era of large models?

ZL: As mentioned earlier, the most significant characteristic of this field is that it’s very new and premature. There are no standards yet, and most people are still at a very early stage in their understanding of security in large models.

Looking back at the development of the security field, it has always followed a certain pattern. It starts with awareness, where people gain a sufficient understanding of risks and issues. Then, problems arise, specific solutions are developed based on these problems, and as more problems are solved, a methodology emerges to address most issues. Eventually, an infrastructure is established. However, the field of security in large models is still in its earliest stages, and even awareness has not been fully established. The industry lacks a clear understanding of the risks associated with LLMs, how these risks manifest, and what the potential losses may be.

Looking ahead, I believe several core driving factors will steer the development of security in large models:

  • Event-driven: Increased attention and demand in this field will gradually rise following impactful security incidents.
  • Regulatory-driven: As security in large models affects the interests of all users, it has become a public concern. Governments worldwide, such as the United States, the European Union, and China, are strengthening legislation and regulations concerning AI. The core focus revolves around security and privacy protection. With these new regulatory policies emerging, companies will face a multitude of new compliance requirements, creating a fresh “compliance” market.
  • PR-driven: As users become increasingly aware of the importance of security in large models, security will become one of the core competitive elements for all enterprises involved in large models. Companies will emphasize their security in external communications, further propelling the rapid development of the security market in large models.

At this stage, all these driving factors are still in their very early stages. I believe that in the next 6–12 months, the most important thing will be to establish awareness. How can awareness be built more effectively and quickly? We believe that evaluation is an excellent means — before problems arise, through evaluation, it can pre-warn manufacturers of potential security vulnerabilities, possible future issues, and the consequences if not addressed. Evaluation can help concretize and visualize large model security issues, presenting them to practitioners and users, and accelerating the establishment of awareness in the industry.

This is the opportunity we see in the field of security in LLMs — a demand-driven market with scarce supply and rapid growth. In this early stage of the industry, we have the opportunity to lead and build awareness and consensus on security in LLMs, becoming key leaders and contributors to setting standards. Occupying the brand recognition of security in LLMs, we can accompany the industry from its early wilderness to maturity, sharing the dividends of this rapidly developing new industry.

03. About HydroX AI

Atom: Could you introduce HydroX AI’s products and services?

ZL: AI safety is currently in its early stages of industry development, and there is a relative lack of mature problem detection capabilities. HydroX AI takes LLM security and safety assessment as a starting point and is dedicated to providing a complete one-stop platform solution. We aim to help AI companies systematically enhance their capabilities and accelerate the secure iteration and implementation of their businesses through automated detection, defense, and monitoring.

Prompt injection is an issue that has been of early concern in the AI safety field, and it carries a relatively high risk. At the beginning of the year, HydroX AI provided a prompt injection validation playground to help people understand such problems and offered initial detection capabilities to discover and mitigate these issues. This was one of HydroX’s earliest attempts.

Recently, we have just released the first version of security evaluations for LLMs. This is the first time in the industry that there is a relatively comprehensive evaluation of well-known LLMs at a granular level. Although this is only the first version, we still have many evaluation aspects that have not been matured and published. However, from this evaluation, you can already see some obvious issues. In the future, we will start from the perspective of problem discovery, provide necessary defense and monitoring capabilities based on real problems, and gradually improve our product portfolio.

Here, we can share some interesting findings from this evaluation:

  • Only 2 LLMs achieved a perfect score in both Security and Safety, namely GPT-4 and Inflection AI.
  • LLMs generally perform well in terms of safety but relatively poorly in adversarial security.
  • For the safety aspect, misinformation is the biggest issue, while for adversarial security, camouflage is the biggest problem.
  • While GPT-4 and Inflection AI received perfect scores, three non-U.S. market large models closely followed. However, the final two large models performed poorly, noticeably lagging. One was from the U.S., and the other was from a non-U.S. market.
  • Note: Security here leans towards security attacks and defense, while Safety encompasses the alignment, interpretability, and robustness of large models.
The LLM security assessment released by HydroX in September.(https://stg-www.hydrox.ai/)

Atom: Please tell us about HydroX AI’s vision and long-term plans.

ZL: HydroX AI’s vision is to “Accelerate AI safety & Build safe AI.” In the long run, we hope to become the industry’s leading large model security company, assisting all large models in becoming safe and compliant. Furthermore, we plan to introduce our own secure models in the future, enabling everyone to use large models for tasks without worrying about security or privacy concerns.

Atom: Could you share some recent developments at HydroX AI? What are you currently working on?

ZL: We have systematically conducted AI security assessments for all publicly available open-source and accessible closed-source large models globally. Based on these assessments, we have generated detailed risk reports, data dashboards, and are preparing related papers. We will soon release the first version of assessment results and risk reports, which are focused on text-to-text models. In subsequent updates, we plan to expand security assessments to cover images, videos, and code generation.

On top of assessments, we will provide defense tools for identified security risks. As I mentioned earlier, the immediate priority in the field of LLM security is building awareness, and assessments and risk reports serve as excellent means to achieve this.

As an AI-native team, we are also exploring how to build new products and technology architectures based on AI. An interesting direction is AI’s self-adversarial learning, where AI models iteratively improve their security defenses through competition (attack models continually search for new security vulnerabilities, while defense models upgrade their protective capabilities in response to these rapidly evolving attacks). In our upcoming security assessments and risk reports, we have already begun experimenting with this self-adversarial approach.

04. How will the market landscape for AI safety evolve in the future?

Atom: How do you see the future market landscape in the AI safety field, such as market size, growth potential, competition, and the possibility of monopolies?

ZL: We conducted a detailed market breakdown assessment internally and estimated the potential market size for LLM security at around ten billion dollars (excluding traditional AI security segments). Considering that the market is still in its early stages, I believe the compound growth rate for this market will be above 50%, and it will enter a period of explosive growth after industry and user awareness is established.

Regarding market competition, I’d like to discuss the competitive landscape first. At present, there are two main categories of competitors in this field. One is traditional security companies like Fortinet and Palo Alto Networks, and the other is startups like us. For the former, LLM security is a completely different domain from traditional network security. If traditional security companies want to enter this field, they essentially have to start a new venture, so everyone’s starting point is similar in terms of business. From this perspective, we can consider all competitors in this market as new startups. In such a competitive environment, the key to gaining a competitive advantage lies in the professionalism of the team, the speed of business advancement, and the ability to quickly collaborate with the industry’s most important customers, jointly influencing industry awareness, consensus building, and even participating in industry standard setting, and growing with the industry. In this regard, I am very confident in our team and the pace of our current business. We have already established deep collaborations with several leading LLM manufacturers worldwide, working together to address LLM security issues.

Now, let’s talk about the long-term perspective on market structure. I have a personal judgment that in the past, the security field was a multi-player market, and having a 20% market share was considered significant. However, with the advent of the AI era, this market structure may be disrupted, and we might see the emergence of monopoly giants in the AI security field. The core reason for the multi-player market in the security sector in the past was the limitation of human efficiency. This led many security companies to only develop one or two specialized products, unable to meet customers’ diverse security product needs. However, the efficiency improvements that LLMs bring could potentially break this limitation. We now observe that LLMs can boost efficiency tenfold or even a hundredfold in some cases, especially for high-level programmers who have strong AI capabilities. This enhancement can significantly break the efficiency and resource allocation bottlenecks. Once this bottleneck is broken, there is a possibility for monopoly giants to emerge in the market.

Atom: In the long term, what do you believe will be the moats and competitive barriers built by HydroX AI?

ZL: I believe the moats will primarily come from the following aspects:

  • Expertise in LLM Security: LLM Security, as a new and cutting-edge domain, currently lacks teams with relevant know-how. Our team has accumulated years of experience in AI, security, and engineering. This enables us to efficiently combine these domains and swiftly transition into the field of LLM security.
  • Experienced Team: We are an international team established in Silicon Valley, gathering world-class experts with extensive experience in security, AI, and engineering. Several members of our team are among the world’s top hackers, with each team member having over 15 years of experience in the security field. Our team’s international background, rich experience, and a comprehensive understanding across multiple domains such as AI, security, and engineering make us one of the most suitable entrepreneurial teams for the field of large model security.
  • Technology: As a native startup in the AI era, HydroX AI aims to maximize the leverage of AI capabilities. We provide highly automated solutions driven by AI. Everything is supported by machines, minimizing human intervention. All experiences, detections, and solutions are stored and reused across various scenarios, allowing our products to rapidly self-iterate and evolve.
  • Top-tier Benchmark Clients: Presently, we have engaged in deep collaborations with various leading companies in the large model domain, assisting these companies in detecting, alerting, and defending against security issues related to large models. Additionally, we actively contribute to industry consensus building and the establishment of certain industry standards. These top-tier clients in the large model domain are key players in the market, and our collaboration with them is one of the significant business moats we’ve built.

05. Partnering with Atom

Atom: We’re delighted to have become investor in the initial round. Could you share why you chose Atom Capital?

ZL: I believe that a good investor should have a deep understanding of the field and be willing to assist the project by providing resources and support beyond just funding. This is what I consider an ideal investor. Compared to many investment organizations I’ve interacted with, Atom has a very hands-on approach. You have profound insights into AI and are willing to spend time sitting down with the startup to discuss the business, introduce resources, and provide assistance. This is something that many organizations find challenging to proactively do or accomplish. These resources, support, and business-related discussions have been very helpful to us. We are pleased to have early investors like Atom accompanying us on our growth journey.

06. Why We Invest in HydroX AI

With the rapid advancement of LLM, its security issues are becoming increasingly important. LLMs have a wide audience, diverse use cases, and deep interactions with users, surpassing any product from the internet era. This not only showcases the significant influence of AI but also implies that if AI encounters security issues, their ripple effects will be hard to predict and could have catastrophic consequences. This signifies that solutions related to large model security will become a strong market necessity — a driving force both from regulatory legislative actions and the genuine business requirements of companies. Therefore, LLM security, including alignment, adversarial security, and privacy protection, has been our focus as an investment direction from the beginning(https://medium.com/@atomcapital/security-of-llm-alignment-privacy-6becddda3ea1) We believe that with the rapid development of AI technology, a new LLM security market will emerge, and it will give birth to new giants in the field of AI security.

The security domain has always been a high-threshold entrepreneurial field, requiring extremely high standards for team backgrounds. The team at HydroX AI is one of the reasons that attracted our investment — the team comprises numerous world-class experts in AI, security, and engineering, meeting the demands of this field for a diversified team background. CEO ZL holds a deep insight and contemplation regarding this field. As large model security is a completely new frontier, early entrants to the field have the opportunity to establish industry consensus with the leading participants, lead the development of industry standards, and gain a first-mover advantage. We have confidence in HydroX AI’s potential to grow into the next generation of AI security leaders in this new field.

--

--

Atom Capital

We are a newly founded venture investing in early stage companies, including AI, Web3, Big data and Cloud native.