Re: Standardization in Civic Tech

A response to Abhi Nemani’s Small (City) Pieces, Loosely Joined


Per his suggestion, I reprint below a letter I recently wrote to Abhi Nemani, an old college friend and civic tech extraordinaire, in response to his recent Medium essay.

Hey dude,

I enjoyed your latest civic tech essay, and it left me with a few thoughts to bounce off you, in PPE fashion.

How does standardization really happen?

Your digitalcityservices.com experiment certainly seems worthwhile, but in the long run baseline standards take hold not only through collaborative norms and transparent comparisons; they’re often ultimately driven by $$$$.

(In the absence of a clear monetary winner, the urge to standardize often just produces more standards, as XKCD nails.)

For instance, city comprehensive annual financial reports have a strict structure and professional standards, driven not only by professional associations and norms but also by the fact that the bond market wants all that data before it lends money.

So maybe state/federal Race to the Top-style incentive grants, or revolving loan funds?

What really matters?

I just read David Gelernter’s (early Yale CS prof / cyber-visionary / Unabomber target) book Mirror Worlds. Beyond the funny early-’90s techno-verbiage (which, btw, pretty accurately anticipated things like social media and cloud computing), Gelernter offers some interesting insight into how increasingly sophisticated virtual “mirror worlds” will impact government.

One of the more interesting ideas there is his notion of “topsight”: using the huge amount of data collected by the mirror world to get a sense of the big picture and to run simulations that test hypothetical policy changes. He envisions simple evaluations of local services to tackle questions like:

Are after school programs in my area good? Am I getting a good bang for my taxpayer buck on road construction? What are the problems in my community that aren’t being addressed?

Those are the sorts of questions that IMHO really move beyond transparency and customer service improvement to transforming the management of existing governmental services.
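
Just to make that concrete (and with entirely made-up numbers), here’s the kind of toy “topsight” calculation I have in mind: score a handful of programs on a crude bang-for-the-buck metric, then simulate a hypothetical budget change to see how the metric might move. The program data, effect sizes, and the metric itself are all invented for illustration, not taken from Gelernter or any real city.

    # Toy "topsight" sketch -- every number and effect size here is made up for illustration.
    import random

    # Hypothetical after-school program records: (annual cost in $, students served, outcome score 0-100)
    programs = [
        (120_000, 80, 62),
        (95_000, 60, 71),
        (210_000, 150, 58),
        (70_000, 40, 75),
    ]

    def cost_per_point(cost, students, score):
        """Dollars spent per student per outcome point -- a crude bang-for-the-buck metric."""
        return cost / (students * score)

    baseline = sum(cost_per_point(*p) for p in programs) / len(programs)

    def simulate_policy(extra_budget_share=0.10, trials=5_000):
        """Monte Carlo guess at a 10% budget boost, assuming each program gains
        an uncertain 0-3 outcome points (an assumption, not a measurement)."""
        results = []
        for _ in range(trials):
            new_metrics = []
            for cost, students, score in programs:
                boost = random.uniform(0, 3)
                new_cost = cost * (1 + extra_budget_share)
                new_metrics.append(cost_per_point(new_cost, students, score + boost))
            results.append(sum(new_metrics) / len(new_metrics))
        return sum(results) / trials

    print(f"baseline $/student-point: {baseline:.4f}")
    print(f"simulated $/student-point after boost: {simulate_policy():.4f}")

Obviously a real version would need real outcome data and a defensible model of how spending translates into outcomes, which is exactly the hard part.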

How do we make meaning out of all this data?

The trendline certainly seems to be toward open, machine-readable data as the default within the next few years. The focus often falls on building some sort of mega-dashboard (or a suite of slick app dashboards) to make meaning out of that unanalyzed data. Yet if I wanted to tackle those sorts of general management questions, I would look to Moody’s / Fitch / S&P rating reports.
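
(By “machine readable” I just mean data you can pull straight into an analysis without a dashboard in between, something like the sketch below; the portal URL and column names are hypothetical stand-ins, not a real endpoint.)

    # Minimal sketch of working directly with an open, machine-readable dataset.
    # The URL and column names below are hypothetical, not a real open-data endpoint.
    import pandas as pd

    CSV_URL = "https://data.example-city.gov/road-projects.csv"  # hypothetical export

    df = pd.read_csv(CSV_URL)

    # A quick analyst-style cut: construction cost per lane-mile by neighborhood --
    # the kind of number a rating-agency report would then wrap in local context.
    totals = df.groupby("neighborhood")[["total_cost", "lane_miles"]].sum()
    cost_per_lane_mile = (totals["total_cost"] / totals["lane_miles"]).sort_values()
    print(cost_per_lane_mile)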

As you note, there’s a ton of local context, and I’m not sure you can capture all of it for every city / water district / school district everywhere with some sort of omni-algorithm.

So how do we make sure this data actually gets analyzed well enough to drive better government decisions? Just some food for thought.

Cheers,

PA