5 Things Architects Should Know About Einstein

Susannah Plaisted
Salesforce Architects

--

If you’re a Salesforce architect, your customers have probably started to ask you about generative AI. To maintain their trust, you’ll need to understand what the Einstein 1 Platform is, the unique advantages it provides, and how it’s different from other Salesforce products. This blog discusses five key concepts that architects need to know about Einstein now.

1. Einstein is hyper-focused on Trust

Salesforce’s first core value is Trust. And with all the unknowns surrounding this new technology, trust is at the forefront of the Einstein 1 Platform. The Einstein Trust Layer is a collection of features and services that support trusted generative AI. From content moderation to the masking of personally identifiable information (PII), our generative AI stack, hosted on Hyperforce, allows you to leverage generative AI without leaving the Salesforce infrastructure that you’ve already vetted with your security teams.
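To make the masking idea concrete, here is a minimal sketch of the concept: sensitive values are swapped for placeholder tokens before a prompt leaves your boundary, then restored in the response. Everything here (the regex, token format, and flow) is an illustration of the pattern, not the Trust Layer’s actual implementation.

```python
import re

# Illustrative PII masking: replace sensitive values with placeholders
# before the prompt is sent, then restore them in the response.
# (The real Einstein Trust Layer implementation is internal to Salesforce.)
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_pii(text: str) -> tuple[str, dict[str, str]]:
    """Replace each email address with a placeholder token."""
    mapping: dict[str, str] = {}

    def _sub(match: re.Match) -> str:
        token = f"<PII_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token

    return EMAIL_RE.sub(_sub, text), mapping

def unmask_pii(text: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the generated response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

masked, mapping = mask_pii("Draft a reply to jane.doe@example.com about her case.")
# The LLM only ever sees the masked prompt.
response = f"Here is a reply addressed to {list(mapping)[0]}."
print(unmask_pii(response, mapping))
```

The key design point is that the mapping from placeholders back to real values never leaves your trust boundary, so the third-party model never sees the underlying PII.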

Einstein also supports the use of external models that have agreed to a zero-retention policy. These external models, from providers like OpenAI, exist in a shared trust boundary. This means that your data is not retained by the third-party Large Language Model (LLM) providers and is not used to train their models. If you’ve already built your own LLM, you’ll soon be able to register it in Model Builder using Data Cloud’s Bring Your Own Model (BYOM) capability.

How can you trust that generative AI is bringing value to your business? We’ll soon be providing functionality that will allow you to measure how generative AI is being leveraged by your users. The feedback your users give when interacting with generative AI, like giving a response a thumbs up or thumbs down, will be stored in Data Cloud so you can run analytics to see the performance of your prompts.
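Once that feedback lands in Data Cloud, the kind of analysis you might run is straightforward to sketch. The record shape and field names below are hypothetical; the actual data model will differ.

```python
from collections import Counter

# Hypothetical feedback records: a prompt template name plus the
# thumbs-up/thumbs-down rating a user gave the generated response.
feedback = [
    {"prompt_template": "case_summary", "rating": "up"},
    {"prompt_template": "case_summary", "rating": "down"},
    {"prompt_template": "email_draft", "rating": "up"},
    {"prompt_template": "case_summary", "rating": "up"},
]

def approval_rate(records: list[dict], template: str) -> float:
    """Share of thumbs-up ratings for one prompt template."""
    ratings = Counter(r["rating"] for r in records if r["prompt_template"] == template)
    total = ratings["up"] + ratings["down"]
    return ratings["up"] / total if total else 0.0

print(f"case_summary approval: {approval_rate(feedback, 'case_summary'):.0%}")
```

A per-template approval rate like this is one simple signal for deciding which prompt templates need refinement.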

2. Data Cloud is a crucial part of the Einstein 1 Platform

To be successful with generative AI, you need good data. The better your data is, the more useful the response from the LLM will be. This is probably not the first time you’ve heard this, but let’s unpack why good data is so important in generative AI.

In order for a prompt to generate a useful response with the business context your users expect, it needs to be grounded. Grounding is the process of enriching a prompt with data that’s specific to your company. This can be done either by adding the additional context directly to the prompt text itself or through a process called retrieval-augmented generation (RAG). But how will the prompt get the data it needs when your data exists across multiple systems, is extremely large in volume, or is structured in a way that can’t be easily stored in a database table?
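The retrieval-augmented pattern can be sketched in a few lines. This is a deliberately naive illustration (a tiny in-memory document store and word-overlap scoring instead of a real vector search), not how the Einstein 1 Platform implements retrieval.

```python
import re

# Stand-in document store; in practice this would be data ingested
# into Data Cloud from your source systems.
DOCUMENTS = [
    "Acme Corp's return policy allows refunds within 30 days of purchase.",
    "Premier support customers are entitled to a 4-hour response SLA.",
    "Acme Corp was founded in 1999 and is headquartered in Austin.",
]

def words(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the question."""
    q = words(question)
    return sorted(docs, key=lambda d: len(q & words(d)), reverse=True)[:k]

def build_grounded_prompt(question: str) -> str:
    """Enrich the prompt with company-specific context before calling the LLM."""
    context = "\n".join(retrieve(question, DOCUMENTS))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_grounded_prompt("What is the return policy?"))
```

The point of the pattern: the model never needs to have been trained on your data, because the relevant facts are fetched at request time and injected into the prompt.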

All of our generative AI capabilities are built using the Data Cloud infrastructure on Hyperforce.

Einstein System Landscape diagram

This means that you can leverage the strengths of Data Cloud to ingest petabyte-scale data, work with structured and unstructured data across disparate systems, and create data graphs so that you can retrieve the data you need to generate useful responses from an LLM. The fact that Data Cloud is a crucial part of the Einstein 1 Platform is a key differentiator of Salesforce’s artificial intelligence offerings.

3. Einstein supports multiple models and it’s multi-modal

The Generative AI Gateway, a key part of the Einstein Trust Layer, supports multiple models. You might leverage a Salesforce-hosted LLM like CodeGen, Anthropic, or Cohere, or an LLM that has established Shared Trust with Salesforce (like OpenAI). Has your company developed a custom LLM? In the future, you’ll be able to connect it to Salesforce with our Bring Your Own Model (BYOM) capability.

Supporting multiple models is a core tenet of our approach to AI. It allows you to continue to use best-in-breed models that are evolving rapidly. In the short term, this means that you’ll be able to choose the LLM you want when you build a prompt template in Prompt Studio.

Our approach to generative AI also allows customers to be multi-modal. Einstein doesn’t just support one modality of content generation. With Einstein you can generate text for an email or service response, generate code with Einstein for Developers, or even generate configuration in tools like Flow Builder.

All of this becomes even more relevant as we prepare for autonomous agents, the next wave of AI. Autonomous agents leverage LLMs not to predict the next word in a sequence, but to orchestrate business tasks by determining the next best action. What will this look like in Salesforce? Soon you’ll be able to work with Einstein Copilot, an AI agent that you interact with using natural language. When you “converse” with Einstein Copilot, it will identify what business goal you’re trying to achieve. As long as you’ve declaratively “trained” the agent on the types of tasks needed to reach your goal, the AI will orchestrate the work across multiple modalities.
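The orchestration idea can be sketched as a loop: a planner (the LLM’s role) decides the next best action, and the runtime executes registered actions until the goal is reached. The action names and the hard-coded planner below are invented for illustration; they are not Einstein Copilot’s actual API.

```python
from typing import Callable

# Registry of declaratively "trained" actions the agent may orchestrate.
ACTIONS: dict[str, Callable[[dict], dict]] = {}

def action(name: str):
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register

@action("look_up_order")
def look_up_order(state: dict) -> dict:
    state["order"] = {"id": "00123", "status": "delayed"}
    return state

@action("draft_apology_email")
def draft_apology_email(state: dict) -> dict:
    order = state["order"]
    state["email"] = f"We're sorry that order {order['id']} is {order['status']}."
    return state

def plan(goal: str) -> list[str]:
    """Stand-in for the LLM planner that determines the next best action."""
    return ["look_up_order", "draft_apology_email"]

def run_agent(goal: str) -> dict:
    state: dict = {"goal": goal}
    for step in plan(goal):
        state = ACTIONS[step](state)  # a human-review step could be inserted here
    return state

result = run_agent("Respond to a customer about a late order")
print(result["email"])
```

Note the comment in the loop: this is exactly where the “human in the loop” fits, reviewing a proposed action before the agent executes it.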

And because we keep trust at the forefront, Einstein Copilot isn’t fully autonomous; there will still be a human in the loop. To go deeper on how autonomous AI could impact the way we build systems, I recommend exploring this blog by Silvio Savarese, EVP and Chief Scientist of Salesforce Research.

4. Einstein will change the way we think about application lifecycle management

As architects, we are used to running unit tests with deterministic results. When a Flow or Apex code runs, we expect a predictable result. But generative AI is non-deterministic, which means that the LLM can generate a different response each time it receives the same prompt. This is one reason why we can’t guarantee that the response the LLM generates will be exactly what your user wants. AI is not replacing humans completely; humans are crucial to the success of generative AI. You’ll work with leaders at your organization to ensure that you stay compliant with your company’s ethical guidelines, and domain experts will be the “humans in the loop” as you roll out trusted AI with Salesforce.
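A toy model makes the non-determinism tangible: generation samples from a probability distribution over next tokens, so the same prompt can produce different completions. The vocabulary and probabilities here are invented purely for demonstration.

```python
import random

# Invented next-token distribution for a single prompt.
NEXT_TOKEN_PROBS = {
    "Thanks for": [("your patience.", 0.5), ("reaching out.", 0.3), ("the update.", 0.2)],
}

def complete(prompt: str, temperature: float = 1.0) -> str:
    candidates = NEXT_TOKEN_PROBS[prompt]
    if temperature == 0:
        # Greedy decoding: always pick the most likely token -> deterministic.
        token = max(candidates, key=lambda c: c[1])[0]
    else:
        # Sampling: the same prompt can yield a different token each call.
        tokens, weights = zip(*candidates)
        token = random.choices(tokens, weights=weights)[0]
    return f"{prompt} {token}"

print(complete("Thanks for", temperature=0))  # always the same
print(complete("Thanks for"))                 # may differ between runs
```

This is why unit tests built around exact-match assertions don’t translate directly to generative features, and why human review and feedback loops matter in the release process described below.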

How does this affect your application lifecycle management? In addition to unit testing your features, you’ll want to ensure an ethical review of your use cases happens before you start to configure your application. After you’ve built your prompt templates and tested them in a sandbox, we recommend a pilot release to production where your domain experts will continue to test and provide feedback. That feedback will be used to ensure quality and allow you to further refine your prompt templates before a larger beta release.

Generative AI Release Plan diagram

5. Einstein has a consumption-based model

In a technical blog we don’t typically talk about pricing, but architects should be aware that Einstein is billed based on usage. This is a notable change for Salesforce architects, who are not used to a consumption-based model. For context, this approach is in line with the way LLM providers calculate the use of their products. If you want an example of how an LLM calculates usage, you can explore the OpenAI Tokenizer.

Salesforce’s unit of measure for usage is called a credit. Credits are consumed based on the length of the inputs (prompts) sent to the LLM and the outputs (responses) generated back to Salesforce. Keep in mind, however, that this is just an overview of how the pricing model works. Contact your AE to discuss specific pricing for your organization.
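A back-of-the-envelope sketch shows the shape of consumption-based metering. Both the tokenizer (a crude word count; real tokenizers split sub-word units) and the credits-per-token rate are invented here, so treat this only as an illustration of the mechanics and contact your AE for actual rates.

```python
# Hypothetical rate, invented for illustration only.
CREDITS_PER_1K_TOKENS = 0.5

def count_tokens(text: str) -> int:
    """Crude proxy: real LLM tokenizers split sub-word units, not words."""
    return len(text.split())

def estimate_credits(prompt: str, response: str) -> float:
    # Both the input (prompt) and the output (response) consume credits.
    total_tokens = count_tokens(prompt) + count_tokens(response)
    return total_tokens / 1000 * CREDITS_PER_1K_TOKENS

prompt = "Summarize the last three cases for this account."
response = "The customer reported two billing issues and one outage."
print(f"Estimated usage: {estimate_credits(prompt, response):.4f} credits")
```

The architectural takeaway: prompt length now has a direct cost, so grounding a prompt with only the data it actually needs is a cost optimization as well as a quality one.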

Conclusion

As a customer’s most trusted technical resource, architects need to truly understand how Salesforce products work so they can credibly advise on an optimal approach. I hope that this blog has provided insight into Salesforce’s AI platform, and that you’re inspired to continue to learn more about this exciting technology. Be sure to download the system landscape diagram pictured above, explore roadmap and release artifacts in the AI project resource gallery, and visit the Template Gallery on architect.salesforce.com for even more diagrams.

Resources



Lead Evangelist, Architect Relations at Salesforce. Words, thoughts and opinions are my own.