California Consumer Privacy Act (CCPA): Impact on Data-Driven Innovation
On February 20, 2020, The Hive hosted a panel discussion at swissnexSF on the California Consumer Privacy Act (CCPA) and its impact on data-driven innovation. Our panel was led by privacy experts, including Lydia F de la Torre (Professor, Santa Clara University & Counsel, Squire Patton Boggs), Stephanie Adamson King (VP, Deputy General Counsel, Product & Privacy, Twitter), Andy Roth (Chief Privacy Officer, Intuit), Michael Hauser (Global Privacy & Data Protection, Pinterest), Joseph Ternasky (Former Director, Privacy and Data Use, Facebook), and Robin Andruss (Director of Privacy Governance, Twilio).
This robust panel discussion was captured in the summary below and can also be watched on The Hive’s YouTube channel.
What is the CCPA? The CCPA (California Consumer Privacy Act) began as a ballot initiative aimed at imposing restrictions on the ability of private-sector entities to collect, use, and share consumers' private information. The legislature enacted the CCPA in 2018 (unanimously, and within one week) in exchange for the withdrawal of the initiative from the ballot. It shares some commonalities with the EU's GDPR (General Data Protection Regulation), the most recent in a line of pan-European data protection frameworks whose origins date back to the seventies.
Compliance with data laws can be challenging as organizations are often uncertain as to how old frameworks apply to new technologies and may lack sufficient guidance as to how new frameworks affect them. Indeed, uncertainty is a theme in this space. De facto, today’s companies are expected to apply existing laws and new laws to technologies, even though some regulatory developments aren’t finalized (e.g. the rules on CCPA are still in the process of being finalized), while some form of enforcement may already be taking place.
Currently, a ballot initiative to amend the CCPA is being circulated and is expected to pass, which brings further uncertainty. Indeed, CCPA 2.0 (formally known as the California Privacy Rights and Enforcement Act [CPREA]) is gathering steam. Arguably, it proposes rational changes related to location information, biometric information, and more.
One of the glaring challenges and points of confusion within the CCPA is the definition of "sale," which is so broad that it is difficult to pinpoint how to translate the requirements into real-world practice.
Many key thought leaders in the space believe privacy is a fundamental human right, a view that echoes public opinion and is increasingly being captured in privacy laws.
Automated Decision Making & Prediction Models
Just how much data is being recorded? Massive, massive amounts. Between sensors, biometrics, geolocation data, and more, companies are able to build a robust profile of a user. This makes many typical American users uncomfortable. However, it is this same principle that enables the personalization so many of us enjoy today, such as product recommendations while shopping on Amazon or content recommendations while streaming on Netflix.
But people are wary of personalization. The greatest fear is having their behavior manipulated without being aware of it. The legal concept of "purpose specification" aims to address this by ensuring that data is not used in a way it was not originally intended to be used. Indeed, when data is collected by one product but then used by a second product, people may feel tricked. Under the CCPA, companies are now required to disclose in advance how the data will be used, and to use it only for the specifically stated purposes.
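Purpose specification can also be enforced in software. The following is a minimal sketch (all names are hypothetical, not a real compliance library) of one way to tag data at collection time with its disclosed purposes and gate every downstream use on them:

```python
from dataclasses import dataclass

class PurposeViolation(Exception):
    """Raised when data is used for a purpose not disclosed at collection."""

@dataclass(frozen=True)
class DataRecord:
    value: str
    allowed_purposes: frozenset  # purposes disclosed to the user at collection time

def use_record(record: DataRecord, purpose: str) -> str:
    # Every access must name a purpose; anything outside the disclosed set fails.
    if purpose not in record.allowed_purposes:
        raise PurposeViolation(
            f"{purpose!r} was not among the disclosed purposes: "
            f"{sorted(record.allowed_purposes)}"
        )
    return record.value

# Collected for account recovery only.
email = DataRecord("user@example.com", frozenset({"account_recovery"}))

use_record(email, "account_recovery")   # permitted
# use_record(email, "ad_targeting")     # would raise PurposeViolation
```

The design choice here is that the purpose set is fixed at collection time, mirroring the legal rule: a later product cannot quietly widen how the data is used without a new disclosure.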
The CCPA cross-context behavioral advertising rules aim to guarantee that if people want to opt out of targeted advertising, they can do so seamlessly across all devices and related platforms. Especially when it comes to cloud computing, the CCPA forces organizations to analyze every data share and interaction across the enterprise: locally, in the cloud, and at the edge.
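The cross-device requirement suggests a design point worth making concrete: the opt-out must be keyed to the person, not to any single device or cookie. A minimal sketch (an in-memory stand-in for what would be a shared preference service in practice; all names are hypothetical):

```python
class OptOutRegistry:
    """Records opt-out preferences keyed by user identity, not by device."""

    def __init__(self):
        self._opted_out = set()  # user IDs of users who opted out

    def opt_out(self, user_id: str) -> None:
        # Recorded once, from whichever device the user happens to use.
        self._opted_out.add(user_id)

    def may_target(self, user_id: str) -> bool:
        # Every platform checks the same registry before serving targeted ads.
        return user_id not in self._opted_out

registry = OptOutRegistry()
registry.opt_out("user-123")          # user opts out on their phone
print(registry.may_target("user-123"))  # False: honored on laptop, TV, etc.
print(registry.may_target("user-456"))  # True: other users are unaffected
```

Because the preference lives with the user identity rather than a per-device token, the opt-out follows the user everywhere, which is the "seamless across all devices" behavior the rules call for.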
Challenges and Opportunities
What are some of the challenges of working under these new regulations? Without a doubt, deciding how best to allocate company resources. Complying with all of these regulations at once is especially challenging for a global company.
How is regulation in general impacting AI? Companies have to be careful about what they do with data in the AI space, because the line between data controller and data processor can blur. The ICO Guidelines provide one framework, and more frameworks are coming out. An additional challenge is ensuring that AI is fair and accountable. Explainability is difficult to achieve in this area (AlphaGo is an example). It is expected that future AI systems will have a built-in feature whereby, when the AI refuses a request or makes a decision, it provides a written, human-understandable explanation. New privacy regulations raise the bar, and according to some experts it is the larger companies that have the advantage, though others disagree.
In some instances, instead of being a roadblock, innovation and rapid advances in technology are driving regulation. Companies that were once against regulation are now embracing comprehensive, sensible regulation. The goal is to set a clear compatible set of rules for companies in a way that sheds much-needed clarity on privacy for users. There has certainly been a cultural shift in this direction over the last several years.
How should we approach regulating data? Fifty-plus individual jurisdictions make regulation very complex, which is why federal legislation is now a widely accepted and desired solution. The more complicated the regulations, the more likely bugs are to creep into the code. Patchwork legislation doesn't reflect how companies or people operate; people move. Companies are told to make privacy policies very simple, yet are also told they must incorporate all the complex rules contained within.
A key piece of advice: implementing regulation well within your company requires committed buy-in from leadership. It also requires both a top-down and a bottom-up approach to rolling out new policies. Some companies handle this best by creating "privacy task forces" that employees from any department can volunteer to join, whose members continually ask, "Is this in line with how we treat privacy? Is this in the best interest of the end user, from a privacy standpoint? Does this protect their data?"
Streamlining common-sense approaches to regulating data and enforcing those rules (approaches that take into account how humans behave in everyday life and how laws are upheld), coupled with robust strategies for building a compliance program (leadership buy-in, company-wide understanding, and enforcement) and relevant frameworks as a path to convergence, will make privacy not only more enforceable from a legal standpoint but will also foster a better user experience. In addition, privacy tech (both startups and in-house projects) will accelerate the rate of adoption among technology companies and will continue to foster innovation.
When Lydia F de la Torre isn’t providing counsel or teaching classes on privacy and data protection, she is contributing thought leadership to the privacy space through her blog, Golden Data.