The future of free speech (Part I): The Special Internet Standard and its Independent Arbitration System

The panic button and the legal pillar of social media regulation

Anthony Bardaro
Adventures in Consumer Technology
21 min read · Apr 15, 2019


Last week, Ben Thompson built upon a spectacular body of work with a new post on Stratechery entitled “A Regulatory Framework for the Internet”:

[My] regulatory framework that looks like this… “Free as in speech” is guaranteed at the infrastructure level, the market polices platform providers generally (i.e. “free as in puppy”), while regulation is narrowly limited to businesses that are primarily monetized through advertising (i.e. “free as in beer”) and thus impervious to traditional content marketplace pressures.

Frameworks like this are foundational building-blocks for both business strategy and policy implementation. After all, we need look no further than antitrust experts, journalists, and politicians to learn that almost everyone fatally misunderstands Web 2.0 businesses from first principles — particularly how the nouveau riche are juxtaposed with traditional industry.

Part-and-parcel, it’s important to reiterate the flaws of both the prevailing populist sentiment and its resulting policy proposals — something many of us have been doing a lot of lately.

‘Do something, anything’

Yet, while we’re all still dabbling in philosophy and theory, there’s an increasing sense-of-urgency for those in-the-know to start advancing some actual, workable solutions, because political momentum is building to ‘do something, anything’ to fix these real societal ills — even were the outcome tragically flawed. In other words, academic frameworks and critical punditry can only get you so far; at some point, the rubber has to hit the road and you need to make-the-leap from theory to practice.

The need for such policy prescriptions to address the visceral pain of tech’s principal-agent problem produced “The Chinese Wall Solution” in 2018.

In the meantime, a more somatic pain has seized the social web, with symptoms like misinformation, cyberabuse, and censorship triggering an autoimmune response. Given the state of discourse around social media and social networking platforms today, I want to address a few specific diseases here and now.

Returning to Ben’s framework, excerpted above, he was careful to divide web businesses into three distinct buckets: web infrastructure, traditional web platforms, and ad-monetized platforms. That general taxonomy of web constituents was as far as Ben was prepared to go, admittedly leaving us with more questions than answers, per his own conclusion:

This framework, to be clear, leaves many unanswered questions: what regulations, for example, are appropriate for companies like YouTube and Facebook? Are they even constitutional in the United States? Should we be concerned about the lack of competition in these regulated categories, or encouraged that there will now be a significant incentive to build competitive services that do not rely on advertising? What about VC-funded companies that have not yet specified their business models?

I want to push the dialogue a step further, by homing-in on that cohort of ‘ad-monetized platforms’ and proposing an actual regulatory system — beyond this basic framework.

Herein, I’m going to flesh-out a proposal about making producers and consumers liable for their roles in propagating content that results in a material cost to society. That sounds scary and, frankly, a bit backwards. For instance, our reflex is to put the onus on the platform itself, as opposed to the humble user, since the platform ‘makes so much money’ (😬) and represents a centralized bottleneck that may thus act as policy’s neatest transmission mechanism. However, the scale here makes it impossible to police ex ante. It structurally requires an ex post approach, and, as such, the extensibility of deterrence, which I’ll discuss shortly, is the best stick I can fathom for the carrot of a better web…

Independent arbitration system

The earliest iteration of the consumer internet, Web 1.0, was known as the “read web”. Its eventual successor, Web 2.0, added social elements, like wikis, that allowed regular users to not only consume information, but also produce it with far greater ease than ever before — hence its nickname as the “write web”.

Web 2.0: Turning the information superhighway into a two-way street

As a result, the information superhighway became a two-way street. When we talk about regulatory approaches to the internet, we so often focus on either the highway authorities (platforms like Facebook/Google/Twitter) or the outbound cargo trucks (publishers like The New York Times/Washington Post/BuzzFeed/Breitbart), but we ignore the inbound passenger vehicles that account for the preponderance of traffic — consumers themselves who create content on the social interwebs.

While verboten for reasons I’ll navigate in a moment, an effective regulatory approach must also account for this user-generated content. I propose a holistic “arbitration system” as follows (a toy sketch of the case flow appears after this list):

  1. Establish new, independent, autonomous arbitration boards funded by an excise tax on free, ad-supported, web and/or app platforms that are both fueled by user-generated content (UGC) and predicated on network effects;
  2. Content disputes over misinformation and protected free speech wouldn’t be Constitutionally admissible, but corrosive trash like terrorism, child-exploitation, and some cases of abuse certainly would be in-play;
  3. Arbitration boards would have a broad set of permissible penalties to apply to those parties found guilty of infractions, ranging from civil penalties like monetary concessions to criminal referrals up to a proper court — all subject to appeal, of course;
  4. Arbitrators would also rule as to the treatment of a guilty party’s related digital content, including whether or not platforms should take-down a specific post/many posts, suspend the user, etc.;
  5. Monetary concessions would get allocated toward covering victims’ damages first and foremost, with additional provisions to provide supplemental funding for these arbitration boards (especially in cases involving a “tragedy of the commons”).
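
To make those moving parts concrete, here is a minimal and purely hypothetical sketch of the case lifecycle in Python. Every name and structure below is my own illustration of the proposal, not language from any statute or platform API:

    from dataclasses import dataclass, field
    from enum import Enum, auto

    class Infraction(Enum):
        """Admissible categories only; misinformation and protected speech stay out of scope."""
        TERRORISM = auto()
        CHILD_EXPLOITATION = auto()
        ABUSE = auto()

    class Remedy(Enum):
        MONETARY_CONCESSION = auto()  # civil penalty
        TAKE_DOWN_CONTENT = auto()    # a specific post or many posts
        SUSPEND_USER = auto()
        CRIMINAL_REFERRAL = auto()    # escalation up to a proper court

    @dataclass
    class ArbitrationCase:
        platform: str                          # the UGC platform hosting the content
        respondent: str                        # the charged user, possibly anonymous
        infraction: Infraction
        guilty: bool = False
        remedies: list = field(default_factory=list)  # Remedy members attached on a guilty verdict
        appealed: bool = False                 # every ruling is subject to appeal

    def allocate_concession(amount: float, victim_damages: float) -> tuple:
        """Victims' damages get covered first; any remainder supplements board funding."""
        to_victims = min(amount, victim_damages)
        to_board = amount - to_victims
        return to_victims, to_board

For example, allocate_concession(100_000, 75_000) would route $75k to victims and the remaining $25k to the boards, per item 5 above.
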
Take out the trash

It’s important to grok the reality of such a proposal: The overwhelming majority of trash will still fall-through-the-cracks — that’s unavoidable given the scale here and its potential strain on any robust system of oversight — but the mere threat of litigation/prosecution is a significant-enough deterrent to take a meaningful bite out of this trash’s aggregate stock and flow. (e.g. You still have moving violations for motorists even though 90% of infractions go unpunished, but the threat of a traffic ticket is enough to keep most people in-line most of the time.)

Would New Zealand’s despicable Christchurch terrorist still have done his deed in this proposed regime? Certainly, but the looming threat of litigation would’ve made many of those who propagated his trash think twice about their role — and they’d be taken-to-task in proportion to their contribution.

Arbitration, not litigation

The choice for arbitration as the means of troubleshooting these issues is an important detail. Arbitration is quicker and more cost effective than civil court, which is a necessary double-threat given the volumes of throughput this system would have to deal with.

Carving-out a separate arbitration system dedicated to digital platform cases would both establish a self-funded operation with specific revenues and avoid straining the preexisting legal system that’s not designed for the digital realm.

That system should be able to dynamically adapt too. For example, if Facebook had a 15% share of cases referred to the arbitration system in which wrongdoing was confirmed, then Facebook’s excise tax would adapt in the following fiscal year — increasing or decreasing in proportion to that financial footprint left by its externalities. (You’d probably want to include the administrative cost of keeping open cases on anonymous users too, but more on that below.) It’s a pretty fair formula for quantifying the societal cost, and it incentivizes Facebook, et al to innovate as a first-line-of-defense.
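
As a toy illustration of that formula, next year’s levy could simply track each platform’s share of confirmed cases, plus the administrative cost of its open anonymous cases. All names and numbers below are hypothetical, not proposed rates:

    def next_year_levy(total_levy: float, confirmed_cases: dict,
                       open_anon_admin_cost: dict) -> dict:
        """Apportion next year's excise tax by each platform's share of cases
        in which wrongdoing was confirmed, plus the administrative cost of
        keeping open cases on its anonymous users (a sketch, not a statute)."""
        total_cases = sum(confirmed_cases.values())
        levies = {}
        for platform, cases in confirmed_cases.items():
            share = (cases / total_cases) if total_cases else 0.0
            levies[platform] = total_levy * share + open_anon_admin_cost.get(platform, 0.0)
        return levies

    # e.g. a 15% share of confirmed cases bears ~15% of a hypothetical $100M levy:
    print(next_year_levy(100_000_000, {"Facebook": 15, "Twitter": 5, "Others": 80},
                         {"Facebook": 250_000}))
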

The Constitution and First Amendment

The Supreme Court has historically done its best to establish a bright-line between the permissible and impermissible broadcast of violence, as discussed in detail below. There’s a big difference between reporting the news and propagating its gory details. Why does any private citizen need to see or share such footage — especially in the public forum? (N.B. The purposes of law-enforcement and a few other narrow cases like artistic expression are notable exceptions, but even still, “the public forum” is necessary in so very few of these circumstances.)

Leapt far ‘cross that line in the sand

By virtue of UGC, common users are broadcasters on these platforms, and while a lot of other content beyond my purview toes-the-line, most abuse/terrorism/child-exploitation leaps far past it. But, in the cases where there’s justifiable and/or subjective context, that’s what the proposed arbitration proceeding is supposed to suss-out, as a first-line-of-defense, acting like a gatekeeper for the formal court system, if necessary.

Furthermore, the path-of-least-resistance for this policy implementation is to drop these platforms and their users under either direct-FCC or pseudo-FCC oversight, then subject them to a special internet standard: Instead of the FCC’s current programming standards against “obscenity/indecency/profanity”, this new system would start with “abuse/terrorism/child-exploitation” for such internet content broadcast in the public forum.

To wit, the following, preexisting FCC mandate sounds a lot like the powers bestowed upon my proposed arbitration system — except I proposed establishing a dedicated, self-funded arbitration system rather than empowering an omnipotent government agency like the FCC:

Congress has given the FCC the responsibility for administratively enforcing the law that governs these types of broadcasts. The FCC has authority to issue civil monetary penalties, revoke a license or deny a renewal application. The FCC vigorously enforces this law where we find violations. In addition, the United States Department of Justice has authority to pursue criminal violations.

I used to have big problems with extending FCC powers to oversight of the web, due to issues like the aforementioned concepts of “free speech”, “strain”, and the “public purse”. But, the structure I’m proposing resolves a lot of that; is open to a new regulatory body; and starts with specific infractions: terrorism, child-exploitation, and abuse.

The First Amendment is concerned with protecting the liberties of the populace against an oppressive government that would seek to squash those rights in its own self-interest. I don’t see a huge overlap between the government’s self-interest and abuse/terrorism/child-exploitation. Although more and more instances of terrorism are being gamed opportunistically for political ends, the proposed arbitration system is theoretically an independent, autonomous body with recognized powers. (More on the importance of that below.)

Plus, the Constitution doesn’t protect expressions that have, in the words of the FCC itself, a “clear and present danger of serious, substantive evil”, which again covers the trash I’m discussing herein. (I’ll clarify this point later.) Other constructs, like violence, are far less black and white:

[T]he FCC does not currently regulate violence on television. While the FCC issued a report in 2007 on the effects of violence on television in which it implored Congress to create regulations to rein in violence on television, to date no such regulations have been created granting the FCC regulatory power over the broadcasting of violence.

Instead, decisions about the appropriateness of violence on television are left to each network to self-regulate and to each parent to monitor.

And, when it comes to the actual content mediums under FCC purview, it would appear that “interstate and international communications by radio, television, wire, satellite and cable” are all nominally within its jurisdiction, but this too is nuanced — depending on the transmission mechanism:

Because obscenity is not protected by the First Amendment, it is prohibited on cable, satellite and broadcast TV and radio. However, the same rules for indecency and profanity do not apply to cable, satellite TV and satellite radio because they are subscription services.

Tragedy of the commons

Crucially, that limitation-of-powers is not only due to cable and satellite being subscription services (e.g. HBO or SiriusXM), but also due to their being generally beyond the government’s reach in the first place. For the most part, the government only controls over-the-air (OTA) broadcast mechanisms, like wireless spectrum, to avert a tragedy of the commons since spectrum is open, scarce, and thus susceptible to overcrowding. In contrast, the capital intensity of satellite and cable infrastructure is — at least for now — a natural barrier-to-entry, obviating the need for centralized rationing.

In sum, the FCC can and does Constitutionally prevent the broadcast of obscene material across all mediums. The FCC also likely has the legal grounds to prevent the broadcast of violence across those mediums too, but Congress has discretionarily opted for a self-regulatory regime instead. The government has less jurisdiction over indecency and profanity due to Constitutional free speech liberties. Combining that with the fact that most non-spectrum-based vehicles (unlike OTA) aren’t open access resources, the content delivered by most transmission mechanisms is also exempt from FCC oversight.

The point is: By both the letter-of-the-law and historical precedent, Congress has the discretion to regulate constructs like abuse/terrorism/child-exploitation; and by virtue of being a common pool resource now in 2019, the internet itself, as a transmission mechanism, qualifies for government oversight. (More on this later too.) In other words, my proposed special internet standard and its independent arbitration system are compliant with American democratic law.

The least bad option

At this juncture, I want to revisit my thesis from the preamble above, originating some three years ago, that a burgeoning anti-tech popular sentiment would evoke the political will to act. From 2017’s “Antitrust Is Tech’s Endgame”:

Why will law-makers pursue such means? Anytime a sector’s margins are this enormous, anytime there’s a perception of excess, anytime the winners are winning so bigly, the populist sentiment turns against them. Tech startups have scaled exponentially, and unfortunately, public sentiment will degenerate just as quickly. So, much like a tech startup, Congress itself will start with a problem (monopoly power), then find a solution (antitrust modernization).

To wit, I’m not proposing that these internet platforms or these arbitration solutions fit under the FCC’s preexisting purview. Above, I specifically said “pseudo-FCC oversight [with] a special internet standard”. So, I’m proposing that they get put under a new system — spun-up in much the same way and for much the same reason as the FCC itself.

Mob rule

I have a track-record of opposing strong-form regulation due to the inevitability of false positives, externalities, unintended consequences, etc — among other inextricable risks. But, again, as I said above: “there’s an increasing sense-of-urgency to start proposing some workable, alternative solutions, because political momentum is building to ‘do something, anything’ to fix these societal ills”.

I’m aware of both sides of this argument — nay, I’m aware of the entire spectrum of opinion. But, I can’t ignore the tipping-point we seem to have reached this year. It’s not so much that the rest of the world has caved to populism and implemented their own asynchronous laws and regulations; it’s more that the US mob has reached sufficient critical mass to set-the-wheels-in-motion here domestically. The people, the media, the politicians, and now the industry itself are all on-board with action-for-action’s-sake, ranging from nihilistic to bureaucratic leanings, with a lot in between.

I’ll keep repeating the same (IMO) realist points-of-view on these things, but I see legislation coming-down-the-tracks, so I’m now more concerned about fashioning a workable solution — instead of choking on the rubbish proposed to date by a lot of those both in and out-of-the-know.

Fortunately and unfortunately, a (representative) democracy often requires compromise. Plus, complex systems like these are infinitely and impossibly, well, complex. As such, this proposal is most certainly an imperfect option in its current state. It will certainly remain imperfect even after continual iteration. But, like democracy and capitalism themselves, the arbitration system may be the worst solution for these problems — except for all other options.

The anonymity problem

Of course, anonymity is a bit of a fly-in-the-ointment here, but a few considerations about that…

The mobster’s pall of perpetual paranoia

First, cases could still be brought against anonymous users; those charged will have to live with the burden of having an open-case against them, knowing that they could be rooted-out and brought-to-justice for the rest of their natural lives — like an organized crime boss under the pall of perpetual paranoia.

Second, the party bringing charges against an anonymous account could resort to charging those real-identity verified accounts adjacent to said anon — those culpable in the propagation of his/her alleged harm. For example, “adjacent” could mean a user or class of users who liked, retweeted, or shared a post that’s being charged — thereby contributing to its propagation. Infractions as simple as ‘you retweeted terrorist content’ would likely be class actions (e.g. charges brought against all of the retweeters as a class instead of each separately/individually); and the penalties for those found guilty should be proportional to the deed. (e.g. As patient zero, the Christchurch shooter would obviously be liable for far greater penalties in relation to the propagation of terrorist content, in isolation, than the subsequent users who distributed it.)
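
One hypothetical way to operationalize ‘proportional to the deed’ is to weight each class member’s share of the total penalty by some measure of their contribution to the content’s reach. The measure and the numbers below are mine, purely for illustration:

    def apportion_class_penalty(total_penalty: float, reach_by_user: dict) -> dict:
        """Split a class-action penalty across propagators in proportion to each
        one's contribution to the content's distribution (illustrative only)."""
        total_reach = sum(reach_by_user.values())
        if total_reach == 0:
            return {user: 0.0 for user in reach_by_user}
        return {user: total_penalty * reach / total_reach
                for user, reach in reach_by_user.items()}

    # Patient zero dwarfs the downstream retweeters, so his share dwarfs theirs:
    print(apportion_class_penalty(90_000.0,
          {"original_poster": 800_000, "retweeter_a": 1_200, "retweeter_b": 300}))
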

Third, if presented with a case that details the charges against an anonymous user, the platforms themselves would have basis, established by independent, 3rd party inspection, for taking enforcement action against said anon — like limiting distribution or suspending affiliated accounts. Indeed, this could be taken as an unsavory precedent of presumed guilt rather than presumed innocence, but that may just be a right these accounts have to cede in exchange for the benefits of anonymity. Furthermore, note that the content won’t get taken-down or deleted until a (guilty) verdict has been announced; instead, its amplification would get thwarted and its perpetrator(s) locked-out if deemed necessary by an adequately-informed platform. There’s that whole “right to free speech but not to distribution” thing, which seems even more reasonable when applied to anons. (N.B. I am aware of and agree with [most of] the studies that conclude anonymity isn’t necessarily problematic vis-à-vis the erosion of online discourse, but anons do present a problem for real world enforcement.)

Piercing-the-veil to bring charges against users themselves, as such, follows the spirit of Section 230 of the Communications Decency Act (CDA):

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

These “Twenty-six Words That Created the Internet” were legislated into law by Congress way back in 1996, enabling the entire Web 2.0/social media boom by protecting platforms from prosecution for the free speech exercised by others — namely the 3rd party users who are considered the publishers of UGC — on digital properties therein. While CDA still grants safe harbor unto the platforms themselves, it has by no means ever provided such broad immunity to said users.

The imminent lawless action problem

Above, I mentioned that the Constitution doesn’t protect expressions that have a “clear and present danger of serious, substantive evil”. Despite that being the FCC’s own assertion on its own website, it’s not entirely true. More accurately, that “clear and present danger” precedent was set by Schenck v. United States in 1919, but it was superseded by an “imminent lawless action” standard from Brandenburg v. Ohio in 1969, which featured a two-pronged test:

Advocacy of force or criminal activity does not receive First Amendment protections if (1) the advocacy is directed to inciting or producing imminent lawless action, and (2) is likely to incite or produce such action.
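
Read as a decision rule, the test is conjunctive: advocacy loses protection only if both prongs are met. A trivial sketch, in my own framing rather than legal doctrine:

    def loses_first_amendment_protection(directed_to_incite: bool,
                                         likely_to_produce: bool) -> bool:
        """Brandenburg's two-pronged test: BOTH prongs must be satisfied before
        advocacy of force or criminal activity loses First Amendment protection."""
        return directed_to_incite and likely_to_produce

    # Documenting lawlessness that already occurred fails prong (1), so protection holds:
    print(loses_first_amendment_protection(directed_to_incite=False, likely_to_produce=False))
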

In other words, the Christchurch shooter’s content and all of its downstream propagators would maintain their First Amendment rights because they were documenting lawlessness that had already occurred, not inciting or producing future lawlessness — and not even likely to incite or produce such. (The prosecution of the shooter’s actual terrorist act is obviously a separate, criminal case.)

That means my proposed arbitration system would likely have trouble proving the imminent lawless action needed to shut-down violators. But, I’d suggest that the second prong of the Brandenburg test, “is likely to incite or produce such action”, could be an entry point.

Television networks regularly choose not to broadcast streakers at sporting events. (To be clear, they make that discretionary choice as part of self-regulation; there’s no law or regulation that forces them to abide, per se, as discussed above.) They also try not to broadcast terrorists, for many of the same reasons: to avoid giving offenders either undue celebrity/martyrdom; an audience to influence; or the opportunity to inspire copycats. Of course, there are exceptions, like the Boston Marathon Bomber’s manhunt in 2013 that required the broadcast of the suspect’s identity in an effort to protect and serve the public while a potentially dangerous criminal was at-large. But, some of those reasons for choosing not to broadcast — particularly martyrdom and copycats — would otherwise be likely to incite or produce imminent terrorist action.

In that vein, a large cache of multidisciplinary research has achieved consensus regarding the tactics of terrorist recruitment and the transfusion of radicalization. While terrorists’ methods will no doubt evolve, that research reveals that free and open web content is critical infrastructure for these pariahs. With respect to false positives on one hand and expediency on the other, the dark cloud of this content is like pressure building-up within these platforms, and it needs a release-valve — an outlet to some form of due process. The arbitration system would be that outlet, and it would be best suited to contemplate the subtleties of these cases, including the applicability of imminent lawless action, as supported (or denied) by qualified research, on a timely basis.

Exposing the word “likely” therein to interpretation is undoubtedly a slippery-slope. Too broad, loose, or liberal an interpretation can establish a precedent that would run amok in all manner of unforeseen ways. For example, we wouldn’t want a Presidential administration weaponizing this against journalists and the free press. Although, conveniently, I’d point out that these more traditional publications fail the arbitration system’s first proposed qualification, in that they’re not “free, ad-supported, web and/or app platforms that are fueled by user-generated content (UGC) and predicated on network effects”. Specifically, more traditional publications aren’t predicated on network effects, per the classical definition:

[T]he effect… that an additional user of a good or service has on the value of that product to others. When a network effect is present, the value of a product or service increases according to the number of others using it…

Network effects are commonly mistaken for economies of scale, which result from business size rather than interoperability… Interoperability has the effect of making the network bigger and thus increases the external value of the network to consumers... primarily by increasing potential connections and secondarily by attracting new participants to the network.

Whether a blog, BuzzFeed, or The New York Times, an additional reader doesn’t increase the value of the product for other readers. In contrast, the value of Facebook, Twitter, and Google increases with every incremental new user — a positive externality with a positive feedback loop.
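
One classical, if contested, way to formalize that distinction is to compare Sarnoff’s law for broadcast audiences against Metcalfe’s law for networks. I’m importing both as outside assumptions, since the excerpt above doesn’t name them:

    def sarnoff_value(n: int) -> int:
        """Broadcast/publisher value: grows linearly with audience size;
        an added reader adds nothing for the other readers."""
        return n

    def metcalfe_value(n: int) -> int:
        """Network-effects value: grows with potential connections, n*(n-1)/2,
        so every new user adds value for all existing users."""
        return n * (n - 1) // 2

    for n in (10, 100, 1_000):
        print(n, sarnoff_value(n), metcalfe_value(n))

Going from 10 to 1,000 users makes the broadcast product 100x more valuable, but the network-effects product more than 10,000x — the positive feedback loop at work.
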

I’d like to say that the appeals system proposed above — including case referrals up into proper courts — will end-up refining the interpretation of “likely”, but even were that to eventually render a Supreme Court opinion, it would come at a considerable cost to many involved.

So, I don’t pretend to have an airtight proposition for this pitfall — and many others that have gone unmentioned thus far. Like governance in general, the smooth functioning of this proposal would require tight guardrails, iteration, checks-and-balances, and ultimately, inextricably, trust in the system…

The devil is in the details (and other problems)

I know this is easier said than done. For starters, it would be hard to find qualified knowledge workers to fill these roles — especially those on the front-lines as arbitrators. Let’s call this the “labor problem”. However, these would be new jobs with good wages, paid endogenously by the private sector in a self-sustaining closed-loop, as described earlier, rather than exogenously by the public purse ad infinitum.

The borderless, supranational nature of the web would also present a challenge. A principally geographic arbitration solution is hard to enact amidst issues of political jurisdiction and VPN loopholes. Let’s call this the “geopolitical problem”. Perhaps it would require a multilateral treaty among UN constituents who agree to uniform standards. Nonetheless, even void of a robust fix for this problem, the arbitration solution would still act as a deterrent for regular, modal, US consumers who contribute to the propagation of this trash.

In addition, this would result in sending a lot of the trash underground, into darkness, and off-the-grid. Let’s call this the “underground problem”. While admittedly a small consolation, at least this cause-and-effect would add friction to viral distribution — per the old “public square vs private living room” analogy.

Finally, albeit the most efficient process available, arbitration logistics would still be relatively time-consuming from start to finish, which would almost certainly mean that the damage would have already been done by the content in question by the time a verdict is rendered. Let’s call this the “latency problem”. Again, all I can say is that the benefits of deterrence are better than nothing. Witting violators have justice served; unwitting learn their lesson. Ignorantia juris non excusat — ignorance of law excuses no one.

All told, these caveats will probably require different solutions of their own, but at least progress against one problem is better than stasis. Regardless, let’s walk before we run here. The proposal herein is a workable “seed for the conversation”, as I’ll keep repeating. It requires collaboration among multidisciplinary experts, not to mention public comment, so we can build/measure/learn and arrive at something more implementation-ready.

Sustainable system design

The system itself needs designing too, and we can all start by referring to the work of Nobel Prize-winner Elinor Ostrom, who spent her career studying “Common Pool Resources” (CPRs) — open access natural resources like forests, fisheries, oil fields, grazing lands, and irrigation infrastructure. Based upon her empirical observations of economically and ecologically sustainable ecosystems, Ostrom identified eight design principles for averting a tragedy of the commons:

  1. Clear definition:
    Clearly define the CPR’s contents and participants (and clearly define excluded contents and participants).
  2. Local adaptation:
    Adapt the CPR’s rules to fit local conditions and culture.
  3. Self-determinism:
    Assure that the CPR’s governance is autonomous and sovereign, by virtue of being recognized by outside authorities.
  4. Democratization:
    Empower those affected by the rules to directly participate in the CPR’s governance.
  5. Evaluation:
    Systematize processes for monitoring participants.
  6. Conflict resolution:
    Provide cheap, easy-to-access mechanisms for conflict resolution.
  7. Sanction:
    Establish graduated sanctions for violators.
  8. Hierarchy (if necessary):
    In the case of larger CPRs, organize multiple layers of nested governance bodies, with small local CPRs at the base level.

Some of these criteria will be harder for this proposed arbitration system to meet than others, but the system’s edifice should follow this outline — and everything I’ve outlined thus far does.

What say ye?

In closing, I’ll reiterate that this solution is Constitutionally workable. At the same time, if ever there were a Constitutional limitation on a solution that’s palatable to “the people, the media, the politicians, and now the industry itself”, then it’s hard to imagine that the quorum won’t make it fit.

If the hand doesn’t fit the glove, then politics has a way of making a glove that fits the hand. A lot more has been pushed-through with a lot less support and a lot less precedent. Consider the government’s tactics with historical analogs like the telephone industry, Hollywood entertainment industry, and broadcast/cable TV industries — all of which were documented by Tim Wu in The Master Switch: The Rise and Fall of Information Empires.

This is your future; have your say here

In the next 5 years, do you really not expect Congress to push-through something to regulate these platforms — no matter how (in)effective or heavy-handed its poetic license? Democracies and autocracies across the globe are already plowing-ahead with their asynchronous solutions. What I’m proposing for the US seems like a lot better seed for the conversation — one that’s palatable Constitutionally and otherwise.

Furthermore, it doesn’t require government intervention to implement version 1: A necessary and sufficient incentive is embedded in a Prisoner’s Dilemma of sorts, which coaxes social platforms themselves to opt-in to the independent arbitration system as an industry-led initiative — meaning voluntary participation from Facebook & Co, Google & Co, Amazon & Co, Epic, Twitter, Pinterest, Snapchat, Medium, LinkedIn, TikTok, etc — that they might collectively lobby federal, state, and local governments to more fully empower, as required by Ostrom’s guidelines.

So, I ask that my followers, my audience, the public, and public servants use the comments section herein as a forum for iterating off of this proposal’s foundation. It’s your future, and now it’s your turn. ATTN: Speaker Pelosi, Leader McConnell, Chuck Schumer, Elizabeth Warren, Keith Rabois, Tim O'Reilly, Sam Altman, Jeff Jarvis, David Perell, James Allworth, Jack Dorsey, Ev Williams

An example of a self-sustaining system

This is how readers of research, blogs, and news get informed and inform others with modern efficiency. It’s an app for web and mobile that lets readers highlight and take notes on articles as they read, and their annotations contribute to crowdsourced summaries of those articles to help inform more passive readers. The resulting knowledge network gives you highlights of everything you need to read and lets you annotate anything you want to save for yourself. Check it out, it’s called “Annotote”, your antidote to the information overload:

All signal. No noise.
