Complexity: Surrender or Simplification

wikipolis
25 min read · Apr 6, 2016


More than two years ago, Samuel Arbesman wrote this essay on complexity that struck a chord with many and that I recommend.

The following is a collection of excerpts.

Perhaps we can replace it (the intellectual surrender to complexity, editor’s note) with the same kind of attitude we have towards weather. While we can’t actually control the weather or understand it in all of its nonlinear details, we can predict it reasonably well, adapt to it, and even prepare for it.

And when the elements deliver us something unexpected, we muddle through as best as we can. So, just as we have weather models, we can begin to make models of our technological systems, even somewhat simplified ones. Playing with a simulation of the system we’re interested in — testing its limits and fiddling with its parameters, rather than understanding it completely — can be a powerful path to insight, and is a skill that needs cultivation (…). This could conceivably be geared towards the direction our educational system needs to move, teaching students how to play with something, examining its bounds and how it works, at least ‘sort of’.

(…) And when things get too complicated and we end up being surprised by the workings of the structures humanity has created? At that point, (…) we will have to become a bit more humble, recognize that there were bounds to what we could know. (…) We shouldn’t throw our hands up and say that just because we can’t understand something, there is nothing else to learn. But at the same time, it might be time to get reacquainted with our limits.

Not all complexity is created equal: there is the unavoidable kind (Nature) and the self-inflicted kind. We shouldn’t waste more time and energy than strictly needed dealing with the rubble of centralized critical infrastructure systems.

As Buckminster Fuller would probably concede, we cannot simplify complex systems like these:

  • a system that actively prevents simpler, self-reliant alternatives from emerging while, at the same time, devaluing human capital (through the constant reinvestment of efficiency innovations into more efficiency innovations) instead of empowering it;
  • an intrinsically unstable system, in which someone’s savings are someone else’s debt (well, 97% of them), and which should instead rely on clearing houses for labour and on widely crowdsourced technologies (distributed ownership and trustless governance) to maximize social progress (not ROI);
  • a system in which risk is centralized but citizens’ attention is so balkanized that the resources needed for systemic self-healing can’t seem to coalesce.

We can (and should) build simpler systems, systems that strive to reduce the dependence on increasingly complex single points of failure.

See Vinay Gupta’s Simple Critical Infrastructure Maps at http://resiliencemaps.org/

Before briefly going through the potential decentralization path for each critical infrastructure (and their interplay), I want to expand the time horizon of the infrastructure’s criticality. To drive home the point I will use a clip from Interstellar.

If we accept that education is crucial (even if neglecting it doesn’t bring about immediate Armageddon), and if we accept that the more educated people the better (“no resiliency without redundancy”), we should assume that, ideally, at any given time, each person should be able to be either educator or educated. This would have all sorts of powerful implications, but the most immediate one (short term) involves media. Because, let’s face it, there is an evergreen component to education (one that gets upgraded to incorporate knowledge created or discovered during a person’s lifetime), and then there is the day-by-day political struggle for people’s cognitive map. Whoever masters the latter long enough can change the former in whichever way they please.

Critical infrastructures

  • MEDIA

More than six years ago, Jay Rosen compiled this list of sources of “subsidy” in the news production business. A sort of updated version comes from Marc Andreessen and includes: advertising, subscriptions, premium content, conferences & events, cross-media, crowdfunding, and philanthropy.

Advertising is the most problematic (and, to my taste, least palatable) of them all. The New York Times, BBC, and AOL recently served up malware with their ads, leading some experts to say that blocking ads is a good way to stay secure. Most of those ads are served to people consuming the kind of “content” published by Gawker (which recently lost the first round of Hulk Hogan’s lawsuit), TMZ, et similia (55% of all page views get less than 15 seconds of attention). There’s an ad-blocking war brewing. Li Ka-shing, Asia’s richest man, invested in Shine Technology, and his company (Three) recently announced that it will block ads at the network level. Shine Technology’s CEO says that advertisers use military-grade tools for tracking and gathering customer data (let alone injecting ads). The fact that he is invested in (and stands to gain from) the solution does not disqualify his argument one bit.

All the other business models (subscriptions, premium content, conferences & events, cross-media, crowdfunding and philanthropy) rely on the same approach: get the money, produce the content. All, except, perhaps, for one: cross-media. News is a key source of material for books, TV, and film — which also happen to be growth businesses. The problems of cross-media are: who puts up the money to produce the original news content, how reliant they are on cross-media for working capital, and what kind of financial muscle (or deal with financiers) they have in place for the funding and distribution of the more expensive cross-media content (assuming they retain an interest along the entire value chain). That’s very, very hard to pull off and virtually impossible to bootstrap.

It would be much easier for a community of people interested in pursuing “economic justice” to crowdfund litigation (among other things, against patents undeservedly “evergreened”). The media side would be nourished by the lawsuit filings and would close the loop by strengthening the crowdfunding, and thus the lawsuits’ odds of success.

Many are pushing the boundaries of what’s technically possible: Buzzfeed is a (traditionally) vertically integrated company adapted to the almost boundless technological possibilities of our times. Few (like NYT Labs) are questioning the underlying assumptions. As far as I know, nobody is trying to…

  • merge education and media by…
  • federating subscription-based vertical media (like The Information) and up-cycling their content into a coherent…
  • body of knowledge (think Quora’s social proof-cum-editorial focus): context-rich explainers rather than decontextualized news…
  • 1000x smaller than Wikipedia but built on primary sources (publicly available filings: like these, only for crowdfunded patent litigations)…
  • stored in a future-proof database (with a content graph queryable in VR) from which new content will be algorithmically produced by recombining older content to adapt it to the new context as it emerges.

I think that the above model could be made to work. The main difficulty that I see is access to talent. An influential media operation cannot exist without cross-pollination. For that, you can have a few people spending a lot of time together (like the exclusive coworking-cum-club SecondHome) or many people spending some time in a “collective dating” context, as pioneered by Bloomberg Beta and, to some extent, EF and TheFamily (both with a less proactive and more “opportunistic” approach).

As high as the potential may be (relative to the bootstrapping odds), it requires a small team with preexisting network centrality, achieved through public deeds rather than words (let alone private lobbying or scheming).

The main role of a new media movement should be to “dispel with the notion” that some people are inherently stupid and that you can’t teach an old dog new tricks.

Which brings us to Education.

  • EDUCATION (from centralized, hierarchical ROI to “resiliency-through-redundancy”)

AlphaGo stunned the world by defeating, many years earlier than “predicted”, the most skilled human player. “We’re seeing dramatic progress on many fronts in AI — said UC Berkeley Prof. Russell — and it seems to be accelerating”.

Inside AlphaGo are two neural networks: one that conjectures new moves and one that evaluates those moves. This split pattern could turn out to be applicable to many other domains and could give us “machine creation” (as opposed to “machine classification”) much faster than we are currently expecting.
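The split pattern generalizes beyond Go. A toy generate-and-evaluate loop (plain functions standing in for the two networks; this is an illustration of the pattern, not AlphaGo’s actual architecture):

```python
from typing import Callable, Iterable

def best_move(propose: Callable[[str], Iterable[str]],
              evaluate: Callable[[str], float],
              state: str) -> str:
    """One component proposes candidate moves, another scores them; pick the best."""
    return max(propose(state), key=evaluate)

# Hypothetical stand-ins: in AlphaGo these roles are played by the
# policy network (proposer) and the value network (evaluator).
propose = lambda state: ["a", "b", "c"]
evaluate = lambda move: {"a": 0.2, "b": 0.9, "c": 0.5}[move]

print(best_move(propose, evaluate, "start"))  # prints "b"
```

Swapping the hand-written functions for learned models is what turns this skeleton into “machine creation”: the proposer conjectures, the evaluator filters.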

Speaking of which: researchers trying to divine the bizarre nature of quantum particle behavior are getting some help from software that designs counterintuitive experiments.

We are using machine learning to accelerate the rate of learning while, at the same time, increasing complexity restricts the number of people who can produce new knowledge (social risk). The net result, though, is smaller teams able to do a lot more with a lot less, favouring a Cambrian explosion in private knowledge (security risk) far outside what can be tracked by state entities. For a host of reasons, the best way forward is “resiliency through redundancy”, provided that we defeat formidable foes like internalized -ism(s).

Before drilling deeper, let me clarify this: I am not merely talking about what would happen to the so-called “labour market” because of AI/automation. The acceleration forces us to try our best at “skating to where the puck is going to be, not where it is today”. Today we are realizing that intelligence is replaceable, and, as a species, we have yet to find an agreed-upon definition of consciousness, much less decide whether it has any value, let alone imagine what a society built around it would look like (if it turned out that consciousness actually has value).

HARARI: The basic process is the decoupling of intelligence from consciousness. Throughout history, you always had the two together. If you wanted something intelligent, this something had to have consciousness at its basis. People were not familiar with anything not human, that didn’t have consciousness, that could be intelligent, that could solve problems like playing chess or driving a car or diagnosing disease. Now, what we’re talking about today is not that computers will be like humans. I think that many of these science fiction scenarios, that computers will be like humans, are wrong. Computers are very, very, very far from being like humans, especially when it comes to consciousness. The problem is different, that the system, the military and economic and political system doesn’t really need consciousness.

KAHNEMAN: It needs intelligence.

HARARI: It needs intelligence. And intelligence is a far easier thing than consciousness. And the problem is, computers may not become conscious, I don’t know, ever … I would say 500 years … but they could be as intelligent or more intelligent than humans in particular tasks very quickly. And if you think, for example, about this self-driving car of Google, and you compare the self-driving car to a taxi driver, a taxi driver is immensely more complex than the self-driving car. (source)

Intelligence is not just (and increasingly no longer) a productive skill to be sold in exchange for food, clothing and a roof. It is the only tool through which we, as a species, can explore consciousness, imagine new ways to value it and truly build a civilization around that mysterious yet foundational element.

Whether you believe that humans will be competing with AI (with all the combinations of conscious vs. unconscious and benign vs. indifferent vs. malign) or with themselves, the competition will be fierce because any difference will be non-linear.

As of today, there are basically three “pure” ways of competing (plus hybrids): nootropics, genetics and neuroprosthetics. Despite the appeal it exercises on Hollywood’s scriptwriters, nootropics seems the one with the least clear path to success. Genetics has a far clearer one. Stephen Hsu has “floated” the idea that humans could potentially be genetically engineered to be super intelligent. Like 1000 IQ points intelligent. As far-fetched as it may seem, not all countries with advanced capabilities have the same ethical standards or the same respect for human life (pleiotropy is, and for the foreseeable future will remain, a huge risk). When it comes to neuroprosthetics, Gary Marcus (professor of psychology at NYU) and Christof Koch (chief scientist of the Allen Institute for Brain Science) are pursuing a more established path while DARPA, as usual, is, if you’ll excuse the trite expression, swinging for the fences.

Whether pursued alone or in combination, nootropics, genetics and neuroprosthetics all have a degree of technological uncertainty attached to them (will they be feasible in a useful timeframe?), a degree of socio-economic uncertainty (will they be affordable for everybody?) and a degree of ethical and cultural uncertainty, that is: even if the earlier conditions were met, will all people be willing to undergo more or less permanent modifications of their biological nature?

Not only should we not assume it; we should actively seek out options that would allow humans to stay as close as possible to our present “form factor” without being left behind by those willing to trade certain (and permanent) modifications for uncertain (and perhaps temporary) advantages.

Holography/AR are already used to drive down research costs, but it will be VR that provides all humans a decent approximation of countless experiences that would otherwise be too expensive. Those experiences would provide a shared repository of analogies able to order, clarify and compress information at the same time. This should improve both information retrieval and pattern matching, and could lead to the invention of VR-based physical experiences able to further improve the attachment of abstract, multi-layered notions to muscle memory: a powerfully updated version of ancient mnemonic techniques.

The funding of those repositories is, I believe, the most pressing issue that mankind faces, and given that they could fall prey to rentiers, the way in which they will be funded is, if possible, even more crucial. This closes the loop with media, the struggle for the cognitive map and all the crowdfunding possibilities.

If humanity is to succeed, knowledge, acquired through the work of everybody, will belong to everyone. That reservoir will increasingly be tapped to provide automated satisfaction of material needs. When we, as a species, create open knowledge, we are doing more than saving: we are compounding (collective) savings at an exponentially higher rate. The difficulty in appreciating this notion (and in living by it) is that most people still don’t see the exponential growth occurring, and the few who do can’t possibly see themselves playing a role in the collective accumulation of that new currency; they feel they should have a way to insure against a future that they didn’t build and fear could be taken away from them (without them even realizing it).

  • TELECOMMUNICATIONS

In an environment of ever-increasing infrastructure costs for chip manufacturing and phone assembly, Blackphone was forced to integrate Google Play. Why? Since they were trying to assemble the entire phone (not cheap), they needed all the customers they could get, even if it defeated the original purpose. Project Ara could change everything.

Project Ara (if it ever comes to pass) could dramatically lower the cost of a secure phone (security means hardware security, because no amount of cryptography can redeem a compromised device). How so? Well, Project Ara would allow the capital-efficient rapid prototyping and testing of new components.

At that point, once the field is wide open on the component side, it would be far less costly to crowdfund the manufacturing of the core platform needed to substitute the one we should expect to be “bugged” by Google.

By the way: it is not as if the cost of prototyping were going up. New materials are enabling unprecedented levels of low-cost experimentation.

In-Q-Tel, the venture capital arm of the CIA, is the leading investor in Voxel8.

Voxel8 would not be possible without the material science progress made by the likes of Graphene 3D Lab.

Or NASA, thanks to which we will soon be able to use plasma to print nanoelectronics.

If your device is secure (hardware-wise), your SIM isn’t compromised, and your behavior defies social engineering, then your next concern could be taken care of by Tor (provided that enough people or institutions use it).

When it comes to telco infrastructure, rural areas are easier to rally into cooperatives. No matter what shape future urban internet access takes, the co-op version of Fon will always be an option.

Even more interesting (albeit not at the current price point) is Koruza. Koruza is an open-source and open-hardware wireless optical system, making free-space optical (FSO) technology available to the masses and providing an alternative to Wi-Fi networks. Koruza can connect buildings up to 100 m apart with a link capacity of 1 Gbps or 10 Gbps.

  • WATER SUPPLY (drinking water, waste water/sewage)

Solar-powered “water-from-the-air” devices (see here and here) and a solar-powered toilet funded by the Bill & Melinda Gates Foundation (see here and here).

  • AGRICULTURE, food production and distribution

Vertical aeroponic production (see investors and founder)

The challenge with this (as with many other technologies) is to scale it down while containing unit costs.

Beyond Meat and others are working on plant-based proteins that reproduce animal protein’s flavour and texture.

Freight Farms, Agricool and others offer a small scale, portable last mile solution for leafy greens production.

Then there are the insects. Tiny Farms builds scalable insect farms, while Bitty Foods, Exo and Chapul produce ingredients, food and snacks made with cricket flour.

Equally interesting, in a world of rapidly decreasing logistical costs (see Benedict Evans’ back-of-the-envelope estimate), is Picobrew. Not just because of its first-order consequences but also for the second- and third-order ones (which I will address when I write on the impact on global trade of the inexorable decline of hydrocarbons). The next steps in the decentralization process are the open-sourcing of something similar to Picobrew (already somehow underway) and the dramatic simplification of tutorials’ UI (through AR).

One thing we can say: processed food makes increasingly less sense, let alone centrally processed food. Same-day delivery allows same-day production, especially with supply-chain transparency and increasingly cheap robotics.

Of course, what is true for Rethink Robotics (see Salim Ismail’s video) is also true for Moley, whose cost (bundled with oven, hob, dishwasher and sink) is “predicted” to fall to $35k by 2018.

This is already a huge decline, but it still does not account for either pneubotics or the peace dividend of the “smartphone wars” (aka the decline in MEMS prices), which makes it possible to crowdsource the scanning of hands’ and arms’ spatial movements for recipes (thus avoiding reliance on http://moley.org/recipes ).

  • ELECTRICITY GENERATION, transmission and distribution

The promise of solar has long been the decentralization of power generation. There are still quite a lot of obstacles, but many powerful self-reinforcing loops are already at play. It all starts from demand. Until recently there was no clear way, as a consumer, to demand solar energy. Then SunPort came along and successfully crowdfunded the mass fabrication of its plugs.

SunPort simply adds a small additional cost that provides the solar upgrade, which also pays to help support new solar farms feeding even more solar into the grid. This upgrade cost is considerably less than standard grid power, since it’s just for the upgrade and not the electricity itself. As an example, a month’s solar upgrade for your laptop from a SunPort will cost no more than $2 extra, and even less than $1 for many people. (source: clean technica)

The more demand for solar power, the lower the capacity factor of fossil-based plants, and the lower the incentives to build them or keep them running. The more demand for solar panels, the more incentive to participate in the competition for cheaper and cheaper solutions.

It used to be that a goal of 35% efficiency (for unconcentrated light) by 2050 was considered a long shot. No longer: Australian engineers have already achieved 34.5%.

Then there are batteries. This report from Deutsche Bank predicts grid parity despite the advent of a low oil price era.

After Elon Musk unveiled the Powerwall (sold out, 12 months out), many are following a similar path (Orison got positive feedback from its Kickstarter campaign).

A utility-scale equivalent of the personal batteries is promised by Arpa-E’s breakthrough:

Ellen Williams, Arpa-E’s director, said: “I think we have reached some holy grails in batteries — just in the sense of demonstrating that we can create a totally new approach to battery technology, make it work, make it commercially viable, and get it out there to let it do its thing.”

There is now a vertical accelerator for solar power startups, in a quest for all sorts of solutions at all levels of the value chain, from lowering installation costs all the way to new financial schemes, the most successful of which remains SolarCity, which recently announced a new tax equity fund to finance $249 million in solar projects.

After it is fully allocated, the fund can be doubled in size to finance a total of $498 million in solar projects. The fund covers the capital cost of solar equipment and installation, making it possible for many homeowners to pay less for the power the systems produce than they pay for electricity from the local utility.

“SolarCity has now created 50 project funds with 21 different financing partners,” said Radford Small, SolarCity’s Executive Vice President, Capital Markets. “Distributed solar’s unique combination of strong returns and societal benefits has attracted a range of corporate and institutional investors and enabled hundreds of thousands of homeowners and businesses to pay less for power generated by solar panels than they pay for power from utilities.”

And since it securitizes as much as it can, SolarCity doesn’t bear the risk. Of course this strategy has a weak spot: the willingness of homeowners to go into debt to install the PV panels. Willingness that could be shaken, nationwide, by an unfavorable ruling in Nevada, where a welfare queen, (not for long?) slumlord and fossil fuel burner extraordinaire like Warren Buffett is suing Elon Musk’s SolarCity through his local utility, NV Energy.

The “Musk vs. Buffett” showdown is only relevant insofar as installation costs remain high enough to rely on the resale of electricity to the grid as the revenue stream that securitizes the financing of the installation (which is today’s scheme, but not necessarily tomorrow’s).

So, we have seen the increasingly sustainable scale-down and decentralization of solar power production; now let’s see whether something similar could happen for wind power.

A massive zaibatsu (conglomerate) like Mitsubishi is investing in a low-cost wind energy technology like Altaeros.

Google itself made an investment in Makani Power which uses an “energy kite” to generate electricity.

The kite is light and can fly at higher altitudes, where it has access to the stronger and steadier winds that allow it to generate more energy. A suite of sensors is used to guide the kite along the flight path that will generate the most electricity. According to Makani, its system generates 50% more energy than traditional turbines and uses 90% less material.

How about water? How come only solar and wind get to be available as small-scale solutions? One solution (the inland one) is Turbulent Hydro, which develops reliable, affordable and highly efficient micro-hydropower plants with no impact on the local environment (no impact on fish).

The sea solution is ISWEC, which, as of today, is indeed a larger-scale one (200 families). ISWEC is sealed inside a floating “home” able to self-align with the main wave direction; its core technology is a gyroscope system that exploits wave slopes to produce energy. An electronic system forecasts and monitors wave conditions to instantly adjust the device and get the maximum power out.

All the above is happening despite oil and gas being increasingly exposed to tech-deflation rather than Malthusian inflation. Which makes…

GAS production, transport and distribution

OIL and oil products production, transport and distribution

less useful and less relevant. It’s not “just” a matter of economics but of agency as well. Real-estate-based energy sources (like hydrocarbons) act as a centralizing force, and humanity has suffered under those forces’ heels for way too long.

New mineral resources (lithium) threaten to keep the “real estate play” alive, but technological progress is so tumultuous that most of those real estate holders will not have nearly as much time to consolidate their power as the Al Saud dynasty had. The real threat to decentralization is another one, though: a researcher has discovered how to pull CO2 from the atmosphere and turn it into carbon fiber. Large (sq. km-wise), hydrocarbon-producing countries with low population density could retain a privileged position. Other than that, it’s all downhill for real estate-based…

  • energy (due to renewables + batteries);
  • metals (given their upcoming, biotech-related decline in relevance);
  • grains (change in diets due to vertical farming and insect-based proteins);
  • meat (plant-based alternatives); and, to a lesser extent,
  • soft commodities (given that geography is unlikely to be abolished by technology: cocoa, coffee, oranges, etc.), which in turn makes…

TRANSPORTATION systems less relevant, especially the B2B ones.

In the short term it could happen because of Open Hardware.

We, collectively, owe a tremendous amount of gratitude to Marcin Jakubowski and the early torchbearers of Open Source Ecology, and I think they would be delighted if anyone could come up with ways to speed up the process of eradicating artificial scarcity.

Getable’s recent pivot could mean that my previous idea of leveraging the (p2p) rental value of OSE’s machines (in order to cover the cost of the needed saturation) might have been a wrong strategy. It is also true that Caterpillar invested in Yard Club’s latest round of funding, and Yard Club still does what Getable used to do.

What Getable does, right now, is essentially insurance (which could be done p2p) plus a couple of convenience bells & whistles.

A more high-level effort is open hardware’s push for low-cost lab kit.

Fifty enthusiasts gathered in early March 2016 at CERN, Europe’s particle-physics laboratory near Geneva, Switzerland, hoping to remedy researchers’ lack of awareness about open science hardware. At the first conference dedicated to the field, they met to compare creations — and to thrash out a road map to promote the widespread manufacturing and sharing of labware. (…) They argue that sharing designs for others to adapt can vastly accelerate the progress of science. But this share-all do-it-yourself (DIY) philosophy is yet to become mainstream. “The majority of scientists are still waiting to get involved,” says Joshua Pearce, an engineer at Michigan Technological University in Houghton, who two years ago published a book for scientists on how to create a low-cost lab.

We are already buying less stuff, built with less material, which is also increasingly recycled.

The end game is material that isn’t just recyclable but can be created by encoding information into locally grown biomass.

Of course, the less material a country consumes and the more arable land it has, the higher the probability of being self-sufficient in atoms.

Furthermore, VR is poised to transform travel, among other things, from a “must” into an “option”.

“Trade in bits, self-sufficiency in atoms” brings us to the many (potential) consequences on financial services.

Finance is not (necessarily) about money. Finance is about exchanging value: getting something immediately but paying it back over time. Today, the only way to pay it back is through debt-based money earned by performing intelligence-based tasks, the demand for which is unrelated to the willingness of the debtor to repay his debt. How store of value, unit of account and medium of exchange came to be bundled (let alone in a “fiat”, debt-based currency) is beside the point here. What matters is that the bundle is socially toxic and there is no reason for it to continue.

Whether it began as pure barter (predicated on a double coincidence of needs) or as a community-based system of equivalences (quasi-units of account: a sheep is worth more or less as much as a goat, a horse more or less as much as a cow, etc.), trade was immediately about mediums of exchange, things that had value. There is a time dimension and a space dimension to trade. Finance addresses both. The first (the mismatch between maturity and fruition) is solved with either interest, partnership or (in small, tightly knit communities) reciprocity (p2p). The latter (counterparty distance) is about trust and is dealt with through simultaneous settlement of trades between goods and their surrogates (widely accepted currencies).

If we had the power to change the underpinnings of the financial system, which unit of account would we pick? Historically, gold became the unit of account on the back of the store-of-value role attached to it throughout the centuries. Given that sooner or later technology will fulfil the alchemists’ dream at an affordable price, it makes no sense to build a new financial system with an expiration date. The Labour Theory of Value, even if we were to rebuild it integrating the marginalist critique, also makes little sense given that, long term, jobs will be for machines.

The unit of account should probably be energy: watts. You could take a country’s renewable energy generation capacity and divide it by its financial wealth. That would determine the conversion rate between a unit of currency and a unit of energy. Energy can be exported (through either the grid or, if stored with enough density, through commercial vessels) or exchanged to pay for imports.
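The proposed conversion can be sketched in a few lines (all figures hypothetical, purely to illustrate the arithmetic):

```python
# Toy illustration of an energy-pegged unit of account.
# All numbers below are hypothetical, chosen only to show the conversion.

def watts_per_currency_unit(renewable_capacity_w: float, financial_wealth: float) -> float:
    """Conversion rate: watts of renewable generation capacity backing one unit of currency."""
    return renewable_capacity_w / financial_wealth

# Hypothetical country: 50 GW of renewable capacity, 2 trillion units of financial wealth.
rate = watts_per_currency_unit(50e9, 2e12)
print(rate)         # 0.025 W of capacity per currency unit
print(1000 / rate)  # backing a 1 kW load "costs" 40000.0 currency units
```

The interesting property, noted below, is that the supply is not fixed: adding generation capacity expands the monetary base without debasing its physical meaning.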

Blockchain-based monitoring of renewable energy contributions to the grid could allow (or force) central banks to peg their new, freedom-enhancing toys (centralized cryptocurrencies) to an intrinsic value without constraining their supply (capacity can always be added) and without having some countries win the geological lottery while the rest choose between working for them or making war on them. Energy-based currencies would probably not be inflationary. And even if they were, they would still be useful as a medium of exchange.

There should be an even more powerful way for people to “mint” their own money. In fact, there is. It’s just one that has (and for the foreseeable future will have) trouble working at long distance. I am talking about Sardex, a complementary credit clearing system that is finally receiving the appreciation it deserves.
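The core of a mutual credit clearing system can be sketched as a zero-sum ledger (a minimal illustration of the mechanism, not how Sardex is actually implemented):

```python
# Minimal sketch of mutual credit clearing (illustrative only; not
# Sardex's actual implementation). Money is "minted" at the moment of
# trade: the buyer's balance goes negative, the seller's positive, and
# the system-wide sum of balances is always zero.

class MutualCreditLedger:
    def __init__(self, credit_limit: float = 1000.0):
        self.balances: dict[str, float] = {}
        self.credit_limit = credit_limit  # how far negative a member may go

    def transfer(self, buyer: str, seller: str, amount: float) -> None:
        buyer_balance = self.balances.get(buyer, 0.0)
        if buyer_balance - amount < -self.credit_limit:
            raise ValueError("credit limit exceeded")
        self.balances[buyer] = buyer_balance - amount
        self.balances[seller] = self.balances.get(seller, 0.0) + amount

ledger = MutualCreditLedger()
ledger.transfer("bakery", "plumber", 200.0)
print(ledger.balances)                # {'bakery': -200.0, 'plumber': 200.0}
print(sum(ledger.balances.values()))  # 0.0
```

The credit limit is what keeps the circuit solvent: no external money enters the system, so trust is bounded per member rather than delegated to a central issuer.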

Sardex can be made more liquid using blockchain (the obvious tool is escrow), its value chain more transparent using Tradeshift-like tools, and financially healthier using C2FO whenever there is an interaction between actors within the circuit and actors outside of it. Tradeshift and C2FO are already integrated.

Whenever complementary currencies/mutual credit and collaborative cash-flow optimization prove insufficient, we will use venture debt (preferably a crowdfunded version of it), unless venture capital is both preferable and available (again, crowdfunded or otherwise). If crowdfunded, either in hard currency, complementary currencies, appcoins or a combination of the above.

P2p insurance already exists; the next logical step is equity-crowdfunded insurance companies that, while accepting laws & regulations, take the compliance risks vis-à-vis the State upon themselves. This would speed up society’s progress without hampering the already quite diminished role of the State; quite the contrary.

Long term, trade in bits (zero marginal cost) or in time/knowledge will become prevalent. Given the preference for “minting our own currency” (labour), there will be a great deal of collective work towards making local systems interoperable. Perfect trade is an ultra-complex optimization problem. The operating system in charge of “human discovery” should know the originator of a request, understand the request and, in light of those, provide a manageable number of potential partners while giving users the means to interact with them in a quick but rich way before the final decision. This isn’t happening anytime soon. What does that mean? That network centrality (the ultimate prize) will continue to accrue to some people who don’t deserve it while evading some who do. It also means that proximity (and richer interactions) will determine how much a person can contribute and who she will work with (and why). Solving the above is a matter of machine learning applied to professional Actions (more in the next paragraph).

In the broader sense, financial services presuppose (at the very least) segregation of the value stored and fraud detection in access to it (plus a plethora of other requirements that ideally should, and perhaps realistically could, be reduced in a less complex society, but never completely eliminated). Segregation and fraud detection presuppose solving Identity (the attribution problem: I am who I say I am, therefore I am entitled to x). Identity is fraught with all sorts of risks (masterfully explored by Vinay Gupta here). Suffice it to say that we should privilege a separation between (the currently prevalent) Profiles (a set of opinions that somebody has about me) and Actions (a log of the places where I have made choices, typically key life decisions).

Action-based identities are very, very rigid things, and represent (for a good number of people) a credential nearly as hard as a State ID, and with considerably better privacy implications in many cases.

In bit-based trade, many of the problems whose solutions shaped finance as we know it would simply not arise. What do I mean by that?

This:

[Michael] Hudson’s next job was with Chase Manhattan, where he used the export earnings of South American countries to calculate how much debt service the countries could afford to pay to US banks. Hudson learned that just as mortgage lenders regard the rental income from property as a flow of money that can be diverted to interest payments, international banks regard the export earnings of foreign countries as revenues that can be used to pay interest on foreign loans. Hudson learned that the goal of creditors is to capture the entire economic surplus of a country into payments of debt service.

We can only speculate about the extent to which consumers will become owners of production robots. Even if prosumerisation were to stop at the current level, the only impact of trade-in-bits (alone) would be on the balance of payments (for more on that, read: https://medium.com/@wikipolis/ce92408a403 )

The impact of technological deflation (and decentralization) on public health is, if anything, much needed, albeit (because of regulation) anything but straightforward.

Given that this is a broad-strokes overview, I will only mention two developments at work in one of the most expensive (among the broadly used) medical procedures: MRI.

Drawings submitted by Butterfly Networks for its patent filing.

At one end of the spectrum (industrial-grade, centralized), Duke University researchers have discovered a new form of MRI that is 10,000 times more sensitive and could record actual biochemical reactions (such as those involved in cancer and heart disease) in real time and for more than an hour. It works through a new class of molecular “tags” that are biocompatible and inexpensive to produce, allowing the use of existing MRI machines.

At the other end of the spectrum (decentralized, prosumer), Butterfly Network is trying to reinvent the ultrasound machine by squeezing all of its components onto a single silicon chip. At this stage it isn’t clear whether Duke’s molecular “tags” could work with Butterfly. One thing we do know is that augmented reality technology like Daqri allows less trained and qualified workers to perform highly specialized, multi-stage tasks while reducing the error rate.

What works for an ultra-regulated sector like civil aerospace (Boeing) should work for health care, too.

This is just an example of the forces at play. Cheaper machines reduce the advantage of big-ticket centralized hospitals (€100–150mln).

The vision for this product has been around for many years. Despite a decade of interest from companies including General Electric and Philips, the technology did not function reliably and has proved difficult to manufacture.

“It remains to be seen whether someone can make it into a market-validated reality,” says Richard Przybyla, head of circuit design at Chirp Microsystems, a startup in Berkeley, California, that’s developing ultrasound systems that let computers recognize human gestures. “Perhaps what was needed all along is a large investment and a dedicated team.”

“The ultrasound [industry] is basically back in the 1970s. GE and Siemens are building on old concepts,” says Charvat. With chip manufacturing and a few new ideas from radar, he says, “we can image faster, with a wider field of view, and go from millimeter to micrometer resolution.”

Given their role, security services intersect all the critical infrastructure they have to protect. It’s quite natural that, if infrastructures were made “non-critical”, their role would change a lot. I don’t think it’s a sensible idea to go into much detail here. Suffice it to say that high-tech military training should become a rite of passage: not a State-mandated one, of course, but one that confers a desirable and sought-after status. In a decentralized society, the new territorial distribution increases both the need for and the appetite for self-defense. A world with a negligible number of points of failure is a world free from terrorism and, therefore, from the heinous trade-offs that we have been asked to make in order to protect our values and our way of life.
