I’d agree that you do not need a precautionary principle to protect you where no harm can be done.
It is there to save us from (very) harmful threats that would earn us a Darwin award or, as a society, send us the way of the dodo. Something like Fukushima.
Applying the precautionary principle could have meant not building the nuclear reactors near a fault line at all, or building them in such a way that any unusual disruption cuts the nuclear reaction short, e.g. by making intelligent and innovative use of the very watery environment (if that is possible).
If you were to innovate in something like self-driving cars, you would probably need fewer, but still some, ‘permission-based’ safety rules until the safety of the self-driving vehicle is established. There is good reason to start testing on non-public grounds. As I remember, the Google car, despite its hiccups, was and still is running on real-life streets in the US, so where is the issue?
I wonder whether those exposed to heroin as cough syrup decades ago would not have welcomed more testing and a precautionary principle before that medicine was applied to them. So does this speak for permissionless innovation?
From Thierer’s preface:
“The central fault line in technology policy debates today can be thought of as “the permission question.” The permission question asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations?”
Now, first, a liberal principle: my freedom stops where the freedom of another human being begins. There may be contention about where this line really lies, how it is to be drawn in a large area of overlap, and how it is to be ensured. Fine. That is why we need rules.
Secondly, there is an important difference between development and deployment. These two steps (or many more, according to the old ‘chain-linked’ innovation model, as you keep iterating and learning) are seemingly wilfully confounded, because it is understood that development entails risks from using unfinished ‘products’. Development is where you straighten out your technology, your approach, your system, so that you do not make a fool of yourself when you start going after customers (*not* alpha testers) in ‘deployment’, when you go for innovation (also known as social acceptance).
I hope you understand that your development should not negatively affect me until you have a safe-to-use prototype or product. For instance, I don’t want the developer of a drone-based delivery service for Amazon interfering with the Airbuses that take off and land over parts of my city. Now, if your drones can be used safely because they cannot fly high enough to get into the path of planes and cannot enter the airspace around the airport, let’s do it.
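To make that last condition concrete, here is a minimal sketch of what such a rule could look like in software. It is purely illustrative and not drawn from Thierer or any real regulation: the altitude cap, the exclusion radius, and the airport coordinates are assumptions made up for the example.

```python
import math

# Assumed values for illustration only; real limits would come from aviation rules.
MAX_ALTITUDE_M = 120            # hypothetical altitude cap for delivery drones
AIRPORT_NO_FLY_RADIUS_KM = 8.0  # hypothetical exclusion zone around the airport
AIRPORT_POS = (50.87, 7.14)     # roughly Cologne Bonn Airport (lat, lon), for the example

def distance_km(a, b):
    """Approximate great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(h))

def flight_allowed(position, planned_altitude_m):
    """Refuse the flight if it would breach the altitude cap or the airport zone."""
    if planned_altitude_m > MAX_ALTITUDE_M:
        return False
    if distance_km(position, AIRPORT_POS) < AIRPORT_NO_FLY_RADIUS_KM:
        return False
    return True

print(flight_allowed((50.94, 6.96), 80))  # downtown Cologne, low altitude: allowed
print(flight_allowed((50.88, 7.12), 80))  # right next to the airport: refused
```

The point is not the code itself but the shape of the rule: the constraint is checked before deployment rather than negotiated after something has gone wrong.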
Thirdly, if you develop a new filter for Instagram, or even Instagram itself, go ahead.
As an innovation aficionado and business economist dabbling in our local Cologne entrepreneurial scene, I haven’t seen that this permission thing is a big issue in entrepreneurship and innovation at all. Entrepreneurs innovate products and processes, and sometimes they also innovatively stretch the rules, but the real constraint on innovation is the ‘market’, e.g.:
- Does your ‘product’ add value?
- Can I introduce your new software in my hospital without it threatening to disrupt my processes?
- Is it really user-friendly enough that my busy nurses or doctors will actually use it?
- Is it cost-efficient?
If so, fine. That reminds me of a statement by a young economist here:
“How self-organisation works: the Smith-Hayek theorem demystified
We’ve seen elsewhere how individuals react to their social position on the basis of their psychology: making tradeoffs or applying rules, being subject to salience, chains and anchors and the persuasions of others.”
