As promised, we’re sharing some exciting news for architects during DreamTX. Some of the biggest news is the new Trailhead content for architects and our newest architect certification: Salesforce Certified B2C Solution Architect.
First, let’s dive into what this new certification is all about, and then let’s look at what’s new on Trailhead.
For many companies, digital transformation is not a long-term goal; it’s a must-have. For companies looking to connect various systems and data sources into a seamless, unified customer experience, the skills of an architect are critical to success. …
Bonny Hinners and Doina Popa
Architects frequently head into new organizations and projects with a variety of stakeholders, each bringing unpredictable levels of expertise. Are they familiar with the Salesforce Platform? Are they experts in system integrations and application design? Collaborating with each new organization requires not only learning about the expertise of others but also establishing credibility and getting people from a wide range of backgrounds to trust our expertise as quickly as possible.
In this conversation, Bonny Hinners, Principal Customer Success Architect at Salesforce, and Doina Popa, Founder and CEO of InnoTrue, discuss their strategies for establishing credibility as experts themselves. …
Citrix Virtual Apps and Desktops enables users on any device to access applications and desktops hosted in a data center. Many Salesforce customers access the application from Citrix environments. When these customers transition from Classic to Lightning, some users experience slow page loads due to various factors that can lead to low Octane scores.
Earlier this year, Jim Tsiamis published a helpful post on the Salesforce Developers Blog titled Improving Lightning Performance on Virtual Desktops. Key takeaways from that post include:
In Part 1, we covered monitoring and diagnostic tools for development and performance-related use cases in a typical Salesforce Application Lifecycle Management (ALM) model. Here in Part 2, we review tools available for security and compliance and for release and maintenance purposes.
Security is paramount in maintaining your organization. With so many security setting options, how do you know which ones are potential issues and which don’t conform to your security policy standards? Keeping tabs on this can be a daunting task. Wouldn’t it be great to have a one-stop shop showing all potential issues, recommendations, and fixes? Salesforce Security Health Check provides exactly that. And it doesn’t stop there: if you have multiple organizations, you can pull this information using the Tooling API and display it, or take action, on your own custom monitoring dashboard. How cool is that?
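If you do automate this across orgs, the Tooling API exposes the Security Health Check score via the `SecurityHealthCheck` object. Here's a minimal Python sketch of building the query URL; the instance URL and API version are examples, and an OAuth access token for authentication is assumed:

```python
from urllib.parse import quote

def health_check_query_url(instance_url: str, api_version: str = "50.0") -> str:
    """Build a Tooling API query URL that retrieves the org's
    Security Health Check score."""
    soql = "SELECT Score FROM SecurityHealthCheck"
    return f"{instance_url}/services/data/v{api_version}/tooling/query/?q={quote(soql)}"

# Example (hypothetical org):
#   url = health_check_query_url("https://example.my.salesforce.com")
#   requests.get(url, headers={"Authorization": f"Bearer {access_token}"})
```

Iterating this call over each org's credentials gives you the raw scores to plot on a custom dashboard.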
Another important area to monitor is making sure your customer or partner community is not able to access more information than needed. This is where you can use the Guest User Access Report, which gives you an overview of the objects and permissions guest users can access from your public communities.
Vulnerabilities in your code are equally important and should be monitored throughout your development and build process. Static code analysis can be run manually or automatically to identify security vulnerabilities and other code quality issues. To take it one step further, make running a full static code analysis on your code base part of your periodic (monthly or quarterly) maintenance schedule. Some popular tools available for this are the Force.com code scanner (offered in partnership with Checkmarx), PMD, and Codescan.io. …
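As an illustration, PMD ships with Apex rule categories, so a scheduled scan can reference a small ruleset file like the one below. The file name and rule selection are examples; check your PMD version's documentation for the exact command-line invocation.

```xml
<?xml version="1.0"?>
<!-- apex-ruleset.xml: security-focused rules for a periodic full scan -->
<ruleset name="Apex security scan"
         xmlns="http://pmd.sourceforge.net/ruleset/2.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://pmd.sourceforge.net/ruleset/2.0.0
                             https://pmd.sourceforge.io/ruleset_2_0_0.xsd">
    <description>Rules applied to the Apex code base each maintenance cycle</description>
    <rule ref="category/apex/security.xml"/>
    <rule ref="category/apex/errorprone.xml"/>
</ruleset>
```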
Just as there is no one tool in a mechanic’s workshop suited to all tasks, there is no single tool that can perform all Salesforce Application Lifecycle Management (ALM) tasks. Instead, you have a comprehensive set of tools available to you. Admittedly, it can be overwhelming to understand which tools are available and when best to apply them. This two-part series covers tools for diagnostics and monitoring. Based on their primary use cases, we loosely group the tools into four categories:
Continuing to grow your skills is an important part of career development. Sometimes, we get the chance to try new things in our day-to-day work. Other times, we need to step outside our daily routine to find new challenges. Mentors can provide trusted guidance in helping identify new areas for growth. How can architects identify growth areas and build a relationship with mentors to address those areas? What makes a really great mentor? How do you ask to be mentored?
What is your philosophy on “mentorship”?
Kim: For me the word “mentor” does not really resonate. I actually relate more strongly to the word “coach”. This may be because I was an athlete, so I tend to look at it from more of a coaching perspective rather than a mentoring perspective. I don’t believe there is one person who can identify all the skills I need to improve on and then lay out a plan (the who, what, and how) for achieving this. I think that is why I believe mentorship is personal to each individual. For me it is finding a network of people to connect with depending on the topic. I really don’t have one person that I would call my mentor. …
Several years ago, we (Sam and Steve) built an orchestration system for a large enterprise customer. This system was capable of synchronizing data across more than ten Salesforce organizations at scale. The original solution was a multicloud architecture, based on Heroku Connect. Data was routed from Heroku Connect through a pipeline built on AWS, using Kinesis and Lambda.
Since that time we’ve both had conversations with customers about this architecture, and those conversations led to this post. We wanted to revisit that original architecture and redesign it for the current Salesforce platform. …
Salesforce survey invitation URLs are typically hundreds of characters long, which can create issues for anyone attempting to send them to mobile devices via SMS messages:
Third-party URL shorteners may solve these issues on a smaller scale, but they become costly as message volumes increase. This issue used to be fairly rare, but it’s becoming increasingly common. Universities, for example, are using tools like Work.com to administer daily wellness check surveys to tens of thousands of students as part of their campus reopening protocols. Large corporations with tens of thousands of employees are facing similar issues. In these scenarios, having to pay for multiple SMS messages to every recipient every time a survey is sent out (which is usually daily) will cause messaging costs to skyrocket. And the fees required to have a third-party system shorten tens of thousands of URLs each day aren’t much better.
If you find yourself in a scenario like this, you can use the approach described in this post to shorten the survey invitation URLs (or any other URLs) yourself without having to rely on a third party.
To start the process, you’ll need to know the URL itself along with a unique ID that can be used to represent it. For a survey invitation URL, you can use the Survey Invitation record’s 18-character Salesforce ID (from the SurveyId field), which can be sent to Marketing Cloud through Marketing Cloud Connect.
Within Marketing Cloud, you can build a data extension to store both of these values prior to initiating your send and create a CloudPage to handle the redirect code (more on that below). When you compose your SMS message, instead of including the full survey invitation URL, include the URL for your CloudPage with the survey invitation ID as a parameter.
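On the real CloudPage the redirect logic would be written in AMPscript or SSJS; the Python sketch below shows the equivalent lookup-and-redirect flow. The table contents, URLs, and parameter name are all illustrative:

```python
from typing import Optional
from urllib.parse import urlparse, parse_qs

# Stand-in for the Marketing Cloud data extension: 18-character Survey
# Invitation ID -> full survey invitation URL (values are made up).
URL_TABLE = {
    "0Kd5e000000CaRbCAK": (
        "https://example.my.salesforce-sites.com/survey"
        "?invitationId=0Kd5e000000CaRbCAK"
    ),
}

def resolve_redirect(request_url: str) -> Optional[str]:
    """Extract the ?id= parameter from the short URL and return the full
    invitation URL to redirect to, or None if the ID is missing/unknown."""
    params = parse_qs(urlparse(request_url).query)
    ids = params.get("id", [])
    return URL_TABLE.get(ids[0]) if ids else None
```

The SMS message then only needs to carry the short CloudPage URL plus the 18-character ID, keeping each invitation within a single message segment.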
Event-driven application architectures have proven to be effective for implementing enterprise solutions using loosely coupled services that interact by exchanging asynchronous events. Salesforce enables event-driven architectures (EDAs) with Platform Events and Change Data Capture (CDC) events as well as triggers and Apex callouts, which makes the Salesforce Platform a great way to build all of your digital customer experiences. This post is the first in a series that covers various EDA patterns, considerations for using them, and examples deployed on the Salesforce Platform.
Back in April, Frank Caron wrote a blog post describing the power of EDAs. In it, he covered the event-driven approach and the benefits of loosely coupled service interactions. He focused mainly on use cases where events triggered actions across platform services as well as how incorporating third-party external services can greatly expand the power of applications developed using declarative low-code tools like Salesforce Flow.
As powerful as flows can be for accessing third-party services, even greater power comes when your own custom applications, running your own business logic on the Salesforce Platform, are part of flows.
API-first, event-driven design is the kind of development that frequently requires collaboration across different members of your team. Low-code builders with domain expertise who are familiar with the business requirements can build the flows. Programmers are typically needed to develop the back-end services that implement the business logic. An enterprise architect may get involved as well to design the service APIs.
However you are organized, you will need to expose your services with APIs and enable them to produce and consume events. The Salesforce Platform enables this with the Salesforce Event Bus, Salesforce Functions, and Streaming API as well as support for OpenAPI specification for external services.
Heroku capabilities on the Salesforce Platform include event streaming, relational data stores, and key-value caches seamlessly integrated with elastic compute. These capabilities, combined with deployment automation and hands-off operational excellence, let your developers focus entirely on delivering your unique business requirements. Seamless integration with the rest of Salesforce makes your apps deployed on Heroku the foundation for complete, compelling, economical, secure, and successful solutions.
This post focuses on expanding flows with Heroku compute: specifically, how to expose Heroku apps as external services and securely access them from flows, using Flow Builder as the low-code development environment. Subsequent posts will expand this idea to include event-driven interactions between Heroku apps and the rest of the Salesforce Platform, as well as other examples of how Salesforce Platform-based EDAs address common challenges we see across many of our customers…
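To register a Heroku app as an external service, you describe its API with an OpenAPI (Swagger) schema. The fragment below is an illustrative sketch only: the host, path, and operation are hypothetical, and you should check which OpenAPI versions your Salesforce release supports.

```yaml
# Minimal, illustrative OpenAPI 2.0 schema for a Heroku-hosted service
# to be registered as an External Service (all names are hypothetical).
swagger: "2.0"
info:
  title: Order Pricing Service
  version: "1.0"
host: pricing-example.herokuapp.com
schemes: [https]
paths:
  /price:
    get:
      operationId: getPrice
      parameters:
        - name: sku
          in: query
          required: true
          type: string
      responses:
        "200":
          description: Computed price for the requested SKU
          schema:
            type: object
            properties:
              amount:
                type: number
```

Once registered, the `getPrice` operation surfaces as an invocable action that a low-code builder can drop into a flow in Flow Builder.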
Part 1 of this series covered why you might want to think about data masking and techniques for implementing it. In Part 2, we show you how to mask data in Salesforce and provide tips for getting your project started.
With data masking, you have two fundamental options: build it yourself or buy an off-the-shelf solution to do it for you.
If your needs are straightforward and you have development resources available, building it yourself could be worth considering.
Homegrown solutions tend to be written in Apex and applied automatically, given the complexity and performance needed at scale. The code is responsible for finding and replacing data values with either random values (anonymization) or values selected from a library you build or acquire (pseudonymization), or for removing the data altogether (deletion). The masking is applied to whichever object records and fields need treating, in as many of your sandboxes as needed, after every sandbox refresh. It’s a good idea to add a scheduled job and an on-demand option to ensure that new data added post-refresh or post-data load is also treated. …
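To make the two replacement techniques concrete, here's a minimal Python sketch of the masking logic (a real Salesforce implementation would be in Apex; the field choices and name library are purely illustrative):

```python
import hashlib
import random
import string

# Illustrative pseudonymization library; in practice you build or buy one.
FIRST_NAMES = ["Alex", "Jordan", "Sam", "Taylor"]

def anonymize_email(_original: str) -> str:
    """Anonymization: replace the value with a random, clearly fake address."""
    local = "".join(random.choices(string.ascii_lowercase, k=10))
    return local + "@example.invalid"

def pseudonymize_first_name(original: str) -> str:
    """Pseudonymization: deterministically map the real value to a library
    entry, so the same input masks the same way on every run."""
    digest = hashlib.sha256(original.encode("utf-8")).hexdigest()
    return FIRST_NAMES[int(digest, 16) % len(FIRST_NAMES)]
```

The deterministic mapping in `pseudonymize_first_name` is a deliberate design choice: related records keep consistent masked values across refreshes, which keeps test scenarios realistic.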