Introduction to The Tranquil Data™ Trusted Flow Edition

Tranquil Data · Published in Tranquil Data Blog · 11 min read · Feb 14, 2024

Yesterday we took a deep-dive into the core of Tranquil Data. Today we’ll cover a second, streamlined product edition that is part of “Dragons.” It’s built on top of the Tranquil Data™ Context Engine as a stand-alone container. It embeds policies and models, hides APIs, and presents simple user interfaces with a different audience in mind.

Empowering Legal, Privacy, and End-User Roles

Tranquil Data is a powerful tool that technologists use to materialize, track, and apply context. If you read the discussion on what Data Context is, however, you see that it starts with contracts, regulations, and privacy policies. This is the domain of legal and privacy roles, who own these requirements and the ensuing processes which ensure that the requirements are met. This is also where the disconnect starts in almost every organization.

If you play a legal role, you’re an expert in the nuance of contracts with third-parties and what they obligate your organization to do when handling related data. What you’re not is an expert in databases, coding, data pipelines, algorithms, or data warehouses. In practice you have no idea what’s happening in these systems, which means you’re constantly in a state of discovery and uncertainty. It also means that growing revenue, by building something new or signing a contract with new terms or regulatory requirements, brings increasingly untenable risk. You want to help your company chart that growth, but can’t ensure that any new requirements surfaced by those opportunities will be handled correctly.

Similarly, if you play a privacy role then you’re an advocate for your users and are responsible for making sure that the policies you put in front of them represent what you put into practice. For example, if you tell a subset of users that their data won’t be used to train an algorithm, then you must ensure everyone inside your company knows and follows that promise, even while the algorithm is trained on a different subset of the data. This results in constant inventory and mapping processes, and daily requests from your organization about whether a given set of data can be used for a specific purpose. What you’d probably rather be focused on is using strong privacy and transparency as a competitive advantage.

Given this, the Tranquil Data™ Trusted Flow Edition has two key goals:

  1. Invert these relationships, so that legal and privacy roles define and own the behavior of systems, ensuring that correct use is automated and technical roles are freed from building ad-hoc enforcement
  2. Provide a product simple enough to configure and deploy that managing and enforcing the rules becomes self-service and technical roles enjoy a net-productivity improvement

To address goal #2 we have included in “Dragons” a stand-alone product offering that is pre-configured with all the pieces you need to address common flows out of the box. It’s a single container that can be provisioned in any environment, and in its simplest configuration only requires a filesystem to deploy. It’s up and running in less than 30 minutes, and exposes four API calls that need to be connected to the rest of your environment at known interface points (below). As you grow with the product there are additional integrations to take advantage of, but getting started takes less than a day of effort.

The product has three components that address goal #1.

Component 1: Policy Configuration

The Tranquil Data™ Trusted Flow Edition works by asking legal and privacy roles some basic questions and then giving those roles a User Interface to configure and own the evolution of rules going forward.

To get started, we ask how you categorize data, e.g., which fields constitute contact information, authentication, demographic data, etc. By “fields” we don’t mean entries in a database; there’s a separate part of the product for tech roles to fill in those mappings for the systems that they administer. What you’re providing here is the kind of language you’d put into a contract or platform terms, and by entering this mapping into Tranquil Data, you now have a single source of truth for this categorization that everyone across the organization can reference. This also gives you a chance to say, e.g., that email as login is different from email as contact information.
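
To make this concrete, here’s a rough sketch of what that mapping could look like once it’s captured; the category and field names below are illustrative only, not the product’s actual schema:

```python
# Illustrative category mapping (hypothetical names, not the real schema).
# Note that the same underlying value, an email address, can land in
# different categories depending on how it's used.
DATA_CATEGORIES = {
    "contact_information": ["email_for_contact", "phone_number", "mailing_address"],
    "authentication": ["email_as_login", "password_hash", "mfa_device"],
    "demographics": ["date_of_birth", "gender", "zip_code"],
    "health": ["conditions", "medications", "lab_results"],
}
```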

The product now invites you to flesh out all of the valid uses of data, giving each purpose free-form summary text and an allowed set of categories. Broadly, this covers:

  1. The rights that users intrinsically have to their own data, which may vary by location or age, and if they’re considered minors, what rights their parents (etc.) have as well
  2. Valid purposes for using data internally (e.g., billing or training algorithms) and any opt-in uses that users must agree to (e.g., additional marketing in exchange for discounts)
  3. Valid purposes for sharing data externally (e.g., advertising or identity management), and for each of those purposes, the specific third-parties and (optionally) specific, allowed categories for each third-party
  4. Any sponsors (below) that may bring users to your platform, the rights each sponsor has to its users’ data, and any obligations to “turn off” purposes from #2 or #3 for the sponsor’s users
  5. Any discretionary reasons that users may consent to sharing specific categories of their data with third-parties, e.g., consenting to share contact data between applications or health data with family members

For #4, a sponsor is a third-party you have a contract with who brings their users to your platform. An employer, for instance, might tell their employees to use a vendor app, and agree on the back-end that the vendor can’t market to their employees. Or, a health insurance plan might offer some digital app to their members, but require that health data can’t be sold or used to train any algorithms. Encoding the terms of these contracts creates immediate transparency for your partners that obligations will be met, and as we’ll discuss, gives you a competitive advantage when you talk with prospects by positioning you as a good steward of data from the start.

For #2 and #3 there are also overrides to disable a given purpose based on location and/or age of a user. This ensures, e.g., that you never market to users under the age of 13, or that you never sell data associated with California users.
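
To tie these pieces together, here’s a sketch of how such a framework might hang together once configured. The structure and names are purely illustrative, not the product’s configuration format:

```python
# Illustrative valid-use framework (hypothetical structure, not the real format).
VALID_USE = {
    "purposes": {
        "billing": {
            "summary": "Process payments and send invoices.",
            "categories": ["contact_information"],
        },
        "model_training": {
            "summary": "Train recommendation algorithms on usage data.",
            "categories": ["demographics"],
            "overrides": [
                {"disable_if": {"age_under": 13}},           # never use minors' data
                {"disable_if": {"location": "California"}},  # location-based carve-out
            ],
        },
    },
    "third_parties": {
        "ad_network_x": {"purpose": "advertising", "categories": ["demographics"]},
    },
    "sponsors": {
        "acme_health_plan": {
            # contractual obligation: these purposes are "turned off" for this
            # sponsor's users
            "disabled_purposes": ["model_training", "advertising"],
        },
    },
}
```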

You now have the framework for valid use encoded in one place. As definitions change, new contracts are signed, or requirements evolve, there is one place to track these updates, and a single source for the history of those changes. You also now have a dashboard that anyone can reference to know the complete framework. Let’s put this to use.

Component 2: User Onboarding

The only technical integration that’s required is at the time a user creates an account or logs into an existing account. At this moment, there are three API calls that should be used.

Think of the moment when you’re creating an account on a website or an app, and it says “by creating this account you agree to terms that are somewhere else.” This leaves a user wondering how their data will actually be used, which is why we see things like Apple’s “Privacy Nutrition Labels” gaining popularity. It’s also increasingly in violation of regulations. To address this, the first API call you integrate provides what you know about the user, like where they live, when they were born, or whether they have a sponsor, and returns the personalized terms that apply to that user.
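
As a sketch of what that first call might look like from the dev team’s side, assuming a REST-style endpoint (the URL, payload, and field names here are illustrative, not the actual API contract):

```python
import requests

# Hypothetical request: send what you know about the new user and get back
# the personalized terms that apply to them.
resp = requests.post(
    "https://tranquil.internal.example/api/terms",
    json={
        "location": "US-CA",
        "date_of_birth": "2001-06-14",
        "sponsor": "acme_health_plan",
    },
    timeout=10,
)
resp.raise_for_status()
terms = resp.json()
# `terms` would carry the free-form purpose summaries that apply to this
# specific user, ready to render in a drill-in consent screen.
```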

Because you wrote free-form descriptions for each element you configured, that’s now provided back to your dev team to put specific and personalized language in front of the new user. Of course, you’re probably not going to put every detail in front of them, but now your dev team can easily implement a drill-in interface that is known to always provide the right language that applies directly to the individual. Essentially, this is automating the process called Affirmative Express Consent. When the user accepts their personalized terms, your dev team calls the second API, which acknowledges consent and tells Tranquil Data to start tracking the contextual knowledge about this user, including exactly when they consented and therefore which terms they were shown.
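
Continuing that sketch, the consent acknowledgment might look something like this (endpoint and fields are again illustrative):

```python
import requests

# Hypothetical second call: record that the user accepted the personalized
# terms they were shown.
ack = requests.post(
    "https://tranquil.internal.example/api/consent",
    json={
        "user_id": "user-123",
        "terms_version": terms["version"],  # `terms` is the response from the
                                            # first call, tying consent to the
                                            # exact terms that were shown
        "accepted_at": "2024-02-14T16:02:11Z",
    },
    timeout=10,
)
ack.raise_for_status()
```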

When that user comes back to the platform later and logs in again, there is a third and final API call that the dev team needs to use to see if any terms that apply to the user have expanded since they last consented. In other words, if anything from the configuration above has changed since the last time a user agreed to terms, and if that change is to some component that applies contextually to the user, and if the change was expansive, then this API call tells you what to do: acquire re-consent, notify the user, or simply let the user proceed. Hint: increasingly, you must do the first one. As part of configuration you can set default behaviors or override defaults for any given change, so you can decide when re-consent is needed, or let Tranquil Data figure it out for you.
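
A sketch of that login-time check, again with an illustrative endpoint and response shape:

```python
import requests

# Hypothetical third call, made at login: has anything that applies to this
# user expanded since they last consented?
check = requests.get(
    "https://tranquil.internal.example/api/consent/user-123/status",
    timeout=10,
)
check.raise_for_status()
action = check.json()["action"]

if action == "reconsent":
    pass  # route the user back through the personalized terms screen
elif action == "notify":
    pass  # show a notice describing what changed; no re-acceptance needed
else:
    pass  # "proceed": nothing that applies to this user has expanded
```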

What these three simple API calls provide together is automated acquisition of personalized, informed consent, and automated assurance that re-consent is run any time the owner of the rules decides that it’s needed. In most cases, it will only take a few days to integrate with the APIs because they’re defined to map to known CX points. Running a re-consent process today on its own can take organizations weeks or months, so the engineering time-savings pays for itself, and the privacy roles who configured the software know with certainty that users are being shown the right set of promises about how their data will be used. Now it’s just a matter of ensuring that those promises are met.

Component 3: Enforcement and Audit

The framework for valid use has been configured, and users have been onboarded with a personalized view into how they fit in that framework. The last step is to ensure that data use and sharing on a per-user basis is consistent with what those users consented to. Here the product offers three points of engagement.

The first two were discussed as part of the Tranquil Data™ Context Engine deep-dive: API query and database integration. In our streamlined product we support a query API, focused on the specific types of configuration (as opposed to supporting open-ended policy query as the engine does). You ask this API whether a given set of fields may be used for a given purpose, and it returns yes or no with an explanation of why that’s the case. If the purpose is not permitted, it can also return the set of fields that would be permitted. This is especially helpful if you want to give data science teams self-service validation, or if you want to automate recurring pipelines.
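
As an illustration of the request and response shape (hypothetical, not the actual API), a purpose check might look like:

```python
import requests

# Hypothetical purpose check: may these fields be used for this purpose?
resp = requests.post(
    "https://tranquil.internal.example/api/check",
    json={
        "purpose": "model_training",
        "fields": ["date_of_birth", "zip_code", "lab_results"],
    },
    timeout=10,
)
answer = resp.json()
# A response might look like:
# {"allowed": False,
#  "reason": "lab_results maps to the health category, which is not
#             permitted for model_training",
#  "allowed_fields": ["date_of_birth", "zip_code"]}
```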

As a database intermediary, when data is queried our software will dynamically resolve which user is associated with any given record, and redact any fields that don’t map to categories for the stated purpose that the user agreed to. In this model security tokens themselves will define the caller and the valid purpose, so that e.g. you could give an OAuth token to a third-party to ensure that their queries only let them have the appropriate data. This is a powerful model for exposing API endpoints externally or ensuring that groups within your organization are only working with the correct data for any given purpose, and gives legal and compliance roles absolute certainty that their requirements are met.
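
The idea behind that redaction can be shown with a toy function; this is a simplified illustration of the mechanism, not the product’s implementation:

```python
# Toy illustration: given the categories a user consented to for a purpose and
# the field-to-category mapping, drop any field whose category isn't permitted.
def redact_record(record: dict, consented_categories: set[str],
                  field_to_category: dict[str, str]) -> dict:
    return {
        field: value
        for field, value in record.items()
        if field_to_category.get(field) in consented_categories
    }

record = {"email_for_contact": "a@example.com", "lab_results": "A1C 5.4"}
mapping = {"email_for_contact": "contact_information", "lab_results": "health"}
print(redact_record(record, {"contact_information"}, mapping))
# {'email_for_contact': 'a@example.com'}
```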

In addition to these two engagement models, the Tranquil Data™ Trusted Flow Edition supports a third way to enforce correct use. A simple User Interface lets someone choose whether they want to use data internally or share it externally. For internal use they pick the purpose, and for external sharing they pick the purpose and third-party. Next, they select a CSV file to upload, and get back a redacted CSV that contains only the data that’s valid for the stated purpose.

This works like the previous database intermediary model, running through each row, resolving the associated user, and using the contextually applicable rules to redact fields or entire rows. In addition to the redacted CSV, a signed envelope is returned that identifies the input and output CSVs, stats about the changes, and identifiers that can be used to correlate the operation with change data capture and decision traces (below). The result is a dataset that’s known to be correct for the stated purpose across the entire population of the data. This is a great self-service capability for sales and marketing teams to stay on the rails, while providing real-time reports to legal and privacy teams that the required processes are being followed.
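
Here’s a toy sketch of that batch flow, with `rules_for_user` standing in for the per-user context lookup the product performs (everything here is illustrative):

```python
import csv

# Toy batch redaction: walk each row, resolve the associated user, redact
# using that user's contextually applicable rules, and tally stats for the
# signed envelope.
def redact_csv(in_path, out_path, purpose, rules_for_user, field_to_category):
    stats = {"rows_in": 0, "rows_dropped": 0, "fields_redacted": 0}
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            stats["rows_in"] += 1
            allowed = rules_for_user(row["user_id"], purpose)  # consented categories
            if allowed is None:  # this user never consented to the stated purpose
                stats["rows_dropped"] += 1
                continue
            for field in list(row):
                if field != "user_id" and field_to_category.get(field) not in allowed:
                    row[field] = ""  # redact the value, keep the column
                    stats["fields_redacted"] += 1
            writer.writerow(row)
    return stats
```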

In all three of these models, and in the same fashion as the engine, a decision trace stream is output capturing hierarchical knowledge about each decision. For the streamlined product we’ve built integrations that take that knowledge, flatten it, and export it to BI tools.
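
As an illustration of what that flattening step involves (the trace structure shown is an assumption, not the actual format):

```python
# Toy flattening of a hierarchical decision trace into flat rows a BI tool
# can ingest.
def flatten_trace(trace: dict, parent: str = "") -> list[dict]:
    rows = []
    for step in trace.get("steps", []):
        path = f"{parent}/{step['rule']}" if parent else step["rule"]
        rows.append({
            "decision_id": trace["decision_id"],
            "purpose": trace.get("purpose"),
            "rule_path": path,
            "outcome": step["outcome"],
        })
        # recurse into nested sub-decisions, carrying the rule path down
        rows.extend(flatten_trace({**trace, "steps": step.get("steps", [])}, path))
    return rows
```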

A live dashboard view of Tranquil Data’s decision trace streamed into Tableau.

Let’s say you’ve deployed a digital health app that is sponsored by a number of employers and health insurance plans. As the head of legal you might want a report that shows exactly how data is being used and shared across your organization. It might also be nice to know if there are specific purposes and/or categories of data that someone is attempting to misuse, to understand if there’s a mismatch in configuration or merely a place where you could tighten best-practices. With Tranquil Data in your environment, and reports like this available, you get back all the time you used to spend running around checking to see if policies were being met, and you have a great way to communicate to your CEO and board why you’re on the rails when a new opportunity comes up.

Another view in Tableau, pivoted off the first, to show how only (hypothetical) Google employee data is used, and showing (by omission) all of the ways that it’s never used.

Now let’s say that you’re running sales, and a big prospect is interested in bringing their members to your platform. The catch is that they have very specific requirements about how their member data is handled, and they worry you might not be able to pass an audit. Typically, a sales team can’t answer that question, and has to go to engineering to ask if the terms can be handled, which in turn causes engineering to go back to legal to ask if specific implementations are likely to work. It’s exhausting. With Tranquil Data, the sales team can answer the question by showing the kind of real-time reports they’re generating for other companies, and immediately instill confidence in the prospect that you can be trusted with their data.

How To Get Started

As we said at the start of this article, the Tranquil Data™ Trusted Flow Edition is purposefully pared down to focus on a specific flow. It defines the full framework that constitutes correct use, onboards users with affirmative consent against a personalized subset of that framework, and ensures and reports that those consents are respected. This flow doesn’t cover every platform, but in practice we’ve found that nearly all companies working with user-centric data need some form of this in their environment. That’s true from pre-seed startups to the Fortune 50.

If that sounds familiar, then get in touch with us about Early Access. We’re happy to set up a demo and learn more about the challenges you’re facing today. We’re confident that Tranquil Data “Dragons” can get you feeling tranquil about your data platforms and accelerate your growth.
