Scanning a sick planet
Ari Gesher is one of the founders of Kairos Aerospace, which uses a mix of cloud data-processing, aerospace sensors built from off-the-shelf hardware, and old-fashioned scientific research & development to transform methane sensing and emissions monitoring. An unstoppable polymath, Ari has a fascinating background in data privacy, algorithms, and community development.
We caught up with Ari to find out more about how best to balance privacy and safety; what we can learn about our planet through creative use of emerging technologies — and good old fashioned ingenuity; and of course, his upcoming talk at Pandemonio.
Pandemonio: You’re analyzing methane emissions from satellites. What other kinds of pollution or global effects are likely to be transformed through advances in machine learning and imagery?
Ari: I think we’re really just getting started in this space. Some of the simplest computer vision one can do is color histogramming — counting the number of pixels of each color in an image. It turns out that by just measuring the mix of colors in a satellite photo, you can get a good proxy for the level of poverty in an urban neighborhood.
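The histogramming idea is simple enough to sketch in a few lines. This is an illustrative example, not Kairos code: the tiny "tile" of RGB tuples stands in for real satellite pixels, which in practice would come from an imaging library.

```python
from collections import Counter

# Hypothetical 3x3 "satellite tile": each pixel is an (R, G, B) tuple.
# Real pixels would come from actual imagery (e.g. loaded via Pillow or rasterio).
tile = [
    (34, 139, 34), (34, 139, 34), (120, 120, 120),
    (34, 139, 34), (120, 120, 120), (120, 120, 120),
    (160, 82, 45), (160, 82, 45), (34, 139, 34),
]

# A color histogram is simply a count of pixels per color.
histogram = Counter(tile)

# Normalizing gives the mix of colors -- the proxy signal described above.
total = len(tile)
mix = {color: count / total for color, count in histogram.items()}
for color, share in sorted(mix.items(), key=lambda kv: -kv[1]):
    print(color, round(share, 2))
```

On real imagery the same counting step runs over millions of pixels per scene, and the resulting color mix becomes a feature for downstream models.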
Today, things like CO2 and methane emissions are mostly estimated through top-down calculations based on proxy measurements like fuel and energy consumption, or on composites of various industry estimates. However, the new bottom-up methods of measuring actual emissions at their source suggest that top-down models systematically underestimate emissions, because large, undetected sources are not accounted for in their estimates.
This bottom-up revolution is important since it delivers the true signal of a problem, hidden variables and all — a necessary ingredient for the type of algorithmic policy-making envisioned by people like Tim O’Reilly, policies that directly connect action to effect in a way that has never been possible before.
P How do government regulators need to change the way laws are passed in an era of abundant data?
A I love the idea of letting regulations help markets work better — by helping to establish baselines for things like public pricing that reward efficiency and punish laggards.
I do think the regulatory agencies of the future should basically be a mix of privacy engineering and data science teams. Running regulatory agencies should be about standards of data interchange and the speed, veracity, and accessibility of the reported data. Once you have good data about the problem, rule-making can become transparent and accountable.
And from a policy goals perspective: only by analyzing the performance of regulations can we know if they’re really achieving their policy objectives or just creating undue burden and friction in the marketplace.
P Looking at the VW emissions scandal, it seems like regulators now need to peer inside the “black box” and regulate the algorithm. How will their skill set change?
A I think peering inside of the black box is way too hard to do. Instead, I think algorithms should be regulated with third-party audits. If those audits are performed on a regular basis, the performance and bias of algorithms can be monitored over time. Similar to the third-party financial audits that are a requirement for public companies, black-box algorithms can be monitored by teams that look at input data and algorithmic output, and measure that against stated performance goals.
In the VW case, that would be submitting randomly chosen cars to the sort of testing that found the emissions anomalies in the first place — as a matter of course. The job of the regulator is to outline the things to be measured and make regulated parties self-perform audits through professional, accredited outside auditors.
P How do we balance a right to privacy with the tools that can protect us from threats?
A On a practical level, systems that hold data about people need to do so responsibly. Step zero is security — if data is not held securely, then no controls placed on its use are meaningful. Once the data is secure, the first step is really minimizing access to only the data that is needed to do the job. The second is to make sure that all use of that data is accountable and auditable. And the third is to have active oversight of its use to spot patterns of misuse as soon as they occur.
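The three steps above — minimized access, accountable use, active oversight — can be sketched as a small access layer. Everything here is hypothetical (the role names, fields, and thresholds are invented for illustration); a real system would back the audit trail with an append-only, tamper-evident store.

```python
from datetime import datetime, timezone

AUDIT_LOG = []  # stand-in for an append-only, tamper-evident audit store

RECORDS = {
    "user-1": {"name": "Alice", "email": "alice@example.com", "ssn": "redacted"},
}

# Step 1: minimize access -- each role may only see the fields it needs.
ROLE_FIELDS = {
    "support": {"name", "email"},
    "auditor": {"name"},
}

def read_record(actor, role, record_id, purpose):
    """Return only the fields the role permits, and log every access."""
    allowed = ROLE_FIELDS.get(role, set())
    visible = {k: v for k, v in RECORDS[record_id].items() if k in allowed}
    # Step 2: make all use of the data accountable and auditable.
    AUDIT_LOG.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "role": role,
        "record": record_id,
        "fields": sorted(visible),
        "purpose": purpose,
    })
    return visible

# Step 3: active oversight -- scan the trail for patterns of misuse,
# e.g. one actor pulling far more records than their job requires.
def flag_bulk_access(log, actor, threshold=100):
    return sum(1 for entry in log if entry["actor"] == actor) > threshold

print(read_record("bob", "support", "user-1", "password reset"))
```

The point of the sketch is the shape, not the specifics: access is scoped before the data leaves the store, every read leaves a record, and oversight runs over the audit trail rather than over the raw data.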
On a more philosophical level, we need to realize the existence of these modern capabilities can be a threat — due to regime change, or mere policy change, the systems designed to protect us can be turned into the infrastructure of authoritarianism.
In the end, the balance to be struck must be in the form of accountable policies. The technology only serves to guarantee that the policy mandates of what is allowed and what is off limits can be meaningfully enforced.
P You’re pretty vocal about being an autodidact. What kinds of jobs are well suited to people who learn as they do things, rather than in more traditional ways?
A Learning as you do things exposes you to the rich complexity of reality rather than some distilled version in a textbook. In that richness, it’s possible to spot patterns that aren’t there in the abstractions.
Any time you’re trying to do something novel — invent the future — this serves you doubly. First of all, there is no textbook to consult, so you’re not missing anything. Secondly, by experiencing the complex reality of something you can see patterns and connections that are not obvious at a more abstract level.
Today, much of invention is about composition — connecting already built artifacts into some larger, meaningful whole. What to connect, how to connect it, and how to translate between seemingly disparate domains — something autodidacts excel at — become a very useful set of skills.
Or to put it another way — you end up building a bunch of cool stuff because you were never taught that it wouldn’t work. ;-)