Javelin Cloud

sharathr · Published in Javelin Blog · 5 min read · Dec 7, 2023

We are excited to unveil Javelin Cloud, an enterprise AI platform anchored by our enterprise-grade Large Language Model (LLM) gateway. This cloud-native platform is designed to give businesses safe, responsible access to a diverse set of large language models. Javelin is the most secure way to integrate sophisticated AI capabilities into your applications while maintaining data security, model guardrails, and enterprise governance.

Platform Overview

  • Secure Connection to LLMs: Implement safeguards customized to your application requirements and responsible AI policies. Javelin Cloud serves as a vital link, connecting businesses of all sizes and industries to advanced large language models.
  • Model Safeguards: Custom LLM-based input/output safety guardrails and classifiers for human-AI interactions.
  • Zero-Trust Security Architecture: The platform’s robust security framework guarantees secure access to foundational models, making it an ideal choice for enterprises safeguarding their AI integrations.
  • Security-First Approach: Tailored to the specific needs of MLOps, Data Security, and SecOps teams, Javelin Cloud offers a suite of tools and features designed for optimum security and compliance.
  • Enterprise-Grade Performance and Reliability: Built on our low-latency LLM Gateway, Javelin Cloud delivers fast inference and enterprise-grade reliability, handling massive request volumes efficiently. This makes it a robust solution for businesses with high-stakes AI requirements.

Features

At launch, Javelin Cloud supports the following features:

Natively supports 100+ LLMs: models from OpenAI, Anthropic, Llama 2, Google Vertex AI, and Amazon Bedrock, among others.
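To make the gateway idea concrete, here is a minimal sketch of what calling a model through a gateway-style proxy typically looks like in Python. The endpoint URL, route name, and header names are illustrative assumptions, not Javelin's documented API.

```python
import os
import requests

# Illustrative only: the gateway URL, route name, and header names are
# hypothetical placeholders, not Javelin's actual API surface.
GATEWAY_URL = "https://gateway.example.com/v1/routes/summarizer/chat"

response = requests.post(
    GATEWAY_URL,
    headers={
        # The application authenticates to the gateway, not to a provider;
        # the gateway decides which underlying model (OpenAI, Anthropic,
        # Llama 2, Vertex AI, Bedrock, ...) actually serves the request.
        "Authorization": f"Bearer {os.environ['GATEWAY_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={"messages": [{"role": "user", "content": "Summarize our Q3 incident report."}]},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```

Because applications talk to one endpoint regardless of provider, swapping or adding models becomes a gateway configuration change rather than an application change.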

Security Guardrails with PII/PHI/Sensitive Data Detection: The platform uses AI-native models to identify and flag personally identifiable information (PII), protected health information (PHI), and other sensitive data in real time. This feature is crucial for maintaining compliance with data protection regulations.

Keyword and Regex for Restricted Words and Phrases: Javelin Cloud can be configured to monitor and detect specific keywords or patterns using regular expressions (regex), ensuring that restricted content is flagged swiftly.
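As a rough illustration of the technique (not the platform's implementation), a keyword and regex screen over prompts can be as simple as the following; the patterns are made up for the example.

```python
import re

# Hypothetical restricted patterns: an internal codename and card-number-like digits.
RESTRICTED_PATTERNS = [
    re.compile(r"\bproject\s+titan\b", re.IGNORECASE),
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),
]

def violates_policy(text: str) -> bool:
    """Return True if the text matches any restricted keyword or pattern."""
    return any(pattern.search(text) for pattern in RESTRICTED_PATTERNS)

print(violates_policy("Roadmap for Project Titan"))   # True
print(violates_policy("What's the weather today?"))   # False
```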

Input-Output Filtering: Using our low latency content filtering framework, you can block undesirable topics and filter harmful content in your generative AI applications.

Actions on Detection of Sensitive Information: Upon detecting sensitive information, the system can either reject the input outright or notify administrators. This includes configurable alerts to keep relevant teams informed. Rejection messages can be configured separately to return customized, friendly responses to end users.

Redaction, Masking, and Anonymization Options: The platform offers robust methods to handle sensitive data, including the ability to redact (remove), mask (cover), or anonymize (de-identify) such information, further enhancing data privacy and compliance.
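The snippet below sketches, in simplified form, how these per-policy actions relate to one another; the regex detector, action names, and anonymization scheme are assumptions for illustration, since the platform's actual detection is model-based and configurable.

```python
import hashlib
import re

# Illustrative detector: flags email addresses as sensitive.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def apply_policy(text: str, action: str) -> str:
    """Apply the configured action to any detected sensitive spans."""
    if action == "reject":        # refuse the input outright
        if EMAIL.search(text):
            raise ValueError("Input rejected: sensitive data detected")
        return text
    if action == "redact":        # remove the span entirely
        return EMAIL.sub("", text)
    if action == "mask":          # cover the span with a fixed token
        return EMAIL.sub("[MASKED]", text)
    if action == "anonymize":     # replace with a stable de-identified token
        return EMAIL.sub(
            lambda m: "user_" + hashlib.sha256(m.group().encode()).hexdigest()[:8],
            text,
        )
    return text

print(apply_policy("Contact jane.doe@example.com for access", "mask"))
```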

Model Fallbacks: In scenarios where the primary language models are unavailable or responding slowly, Javelin Cloud seamlessly switches to backup models. This ensures continuous service availability and reliability, minimizing disruptions to business operations.
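Conceptually, the fallback behavior resembles the pattern below; the route URLs and timeout are made up for illustration and do not reflect Javelin's configuration format.

```python
import requests

# Hypothetical ordered list of model routes: primary first, backups after.
MODEL_ROUTES = [
    "https://gateway.example.com/v1/routes/primary-model",
    "https://gateway.example.com/v1/routes/backup-model-a",
    "https://gateway.example.com/v1/routes/backup-model-b",
]

def complete_with_fallback(payload: dict, timeout_s: float = 5.0) -> dict:
    """Try each model route in order, falling back when one is down or too slow."""
    last_error = None
    for route in MODEL_ROUTES:
        try:
            resp = requests.post(route, json=payload, timeout=timeout_s)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException as exc:  # timeouts, 5xx, connection errors
            last_error = exc
    raise RuntimeError("All configured models are unavailable") from last_error
```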

Secure Credential Vault: The platform includes a secure secrets vault for storing Large Language Model credentials. It automatically injects these credentials at runtime, simplifying authentication and enhancing security by reducing the exposure of sensitive credentials.
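The pattern, sketched loosely below, is that applications hold only a gateway key while provider credentials live in the vault and are attached at request time; the lookup function and header shown here are hypothetical stand-ins, not the vault's real interface.

```python
import os
import requests

def provider_credential(provider: str) -> str:
    """Stand-in for a secrets-vault lookup; a real vault would return the
    credential from encrypted storage rather than an environment variable."""
    return os.environ[f"{provider.upper()}_API_KEY"]

def forward_to_provider(provider: str, url: str, payload: dict) -> dict:
    # The gateway injects the provider credential at runtime, so the calling
    # application never sees or stores the underlying provider API key.
    headers = {"Authorization": f"Bearer {provider_credential(provider)}"}
    resp = requests.post(url, json=payload, headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()
```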

Chronicle for Auditing & Compliance: A comprehensive interaction archive, the Chronicle is a secure repository that logs every interaction with the models via the gateway. This includes a detailed record of each prompt, request, and response, which is invaluable for auditing and compliance, ensuring transparency and accountability in model usage.
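For a sense of what such an archive captures, a single record might look roughly like the following; the field names are assumptions for illustration, not the Chronicle's actual schema.

```python
# Hypothetical shape of one audit record logged by an interaction archive.
audit_record = {
    "timestamp": "2023-12-07T18:22:31Z",
    "route": "summarizer",
    "provider": "openai",
    "request": {"messages": [{"role": "user", "content": "..."}]},
    "response": {"content": "...", "finish_reason": "stop"},
    "policy_events": ["pii_masked"],
    "latency_ms": 412,
}
```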

Bring-Your-Own Data Stores: Customers can provide their own secure data stores for storing archives; with this option, we never store or cache prompts, requests, or responses.

Analytics: Javelin Cloud provides detailed analytics on the usage of different models, broken down by routes and providers. This enables enterprises to track and analyze how applications and users utilize the models, offering insights for optimization and resource allocation.

What's next?

Initially, Javelin Cloud will be available in the AWS US-East region as a closed alpha, with free access for early users to kick the tires and help shape our product roadmap. As we move into the Beta phase and introduce a pricing model for production workloads, we will expand the service to other global regions (including AWS EMEA-Frankfurt) and add Private VPC and multi-cloud deployment options.

Our mission is to help enterprises responsibly adopt foundational models. Javelin Cloud is more than just a platform; it symbolizes our dedication to pushing the boundaries of security and AI integration in enterprises.

We’re also shipping fast — some core areas we’re focused on in the short term:

  • A collection of custom, fine-tuned input/output safeguards for human-AI conversations, built on LLM-based cybersecurity classifiers
  • Expanded security capabilities, including real-time scanning of model inputs and outputs for vulnerabilities, content filtering, malware filtering, and more
  • Sensitive data detection across multimodal model inputs (documents, code, structured content, images, audio, video)
  • Rich front-end components to enable ease of use
  • More enterprise integrations: Slack and Datadog are on our roadmap

If this sounds interesting, we’d love to have you try out Javelin!

🚀 Visit our website to learn more

Sign up for a trial and experience Javelin firsthand

👀 Contact our team or schedule a demo for further information
