Theo van Doesburg — Architectural Analysis (1923)

I wrote this post for Design Assembly (UK) back in 2011. I wanted to republish it after reading Kevin Kelly's excellent new book, The Inevitable (in particular chapter 1, "Becoming").

Looking back as little as five years ago, online publishing was mainly about enabling users to publicize content. Systems gave users a dedicated channel with an easy-to-use content management system. This produced a surge in the amount (and the sources) of online data, and the popularity of systems like Blogspot, WordPress and Posterous grew rapidly, vastly expanding the blogosphere.

However, this has changed over the last couple of years. The widespread popularity of Twitter and the growing interconnection of online services have shifted both publishing culture and users' consumption habits. Whereas the focus used to be on a steady stream of content, users can now control feeds and content in new ways that affect publishers. They can filter and customize not only the sources they follow but also the flow of data itself: adjusting dashboard settings, wiring up RSS feeds and setting action hooks.
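The kind of user-side feed filtering described above can be sketched in a few lines. The example below is a hypothetical illustration using only Python's standard library: it parses a small inline RSS document and keeps only the items whose titles match a chosen keyword. A real setup would fetch the feed over HTTP and apply richer rules.

```python
import xml.etree.ElementTree as ET

# A tiny sample RSS document; a real feed would be fetched over HTTP.
RSS = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example feed</title>
  <item><title>Design trends in 2011</title><link>http://example.com/1</link></item>
  <item><title>Cooking at home</title><link>http://example.com/2</link></item>
  <item><title>Responsive design tips</title><link>http://example.com/3</link></item>
</channel></rss>"""

def filter_feed(xml_text, keyword):
    """Return (title, link) pairs for items whose title contains keyword."""
    root = ET.fromstring(xml_text)
    kept = []
    for item in root.iter("item"):
        title = item.findtext("title", "")
        link = item.findtext("link", "")
        if keyword.lower() in title.lower():
            kept.append((title, link))
    return kept

print(filter_feed(RSS, "design"))
```

This is the essence of "wiring up" a feed: the reader, not the publisher, decides which items make it into the stream.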

Publishers ought to adapt to this new 'ecosystem' in order to maximize the reach and effectiveness of their content. Each unit of content can, and should, stand independently rather than being merely an extension of the channel it originated from. Content often lives apart from its original hub; this is as true of an image on Designspiration as it is of a page on Checkthis.

An image, a piece of text or a video should be able to stand alone, convey as much as possible of the original message, and link back to it. It should be in the highest resolution possible for the format, and be credited back to the publisher. The latter is crucial, and often neglected by users who aggregate content: without a clear credit, the chain can't be traced back to the source.
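One way to read this is that every unit of content should carry its own attribution wherever it travels. The sketch below uses a hypothetical schema of my own (the field names are assumptions, not any platform's API) to show a content unit bundled with its canonical source link and credit:

```python
import json

# Hypothetical schema: every content unit travels with its own attribution,
# so the chain back to the source survives re-sharing and aggregation.
def make_unit(kind, payload, source_url, credit):
    return {
        "kind": kind,          # "image", "text" or "video"
        "payload": payload,    # the content itself, or a URL to it
        "source": source_url,  # canonical link back to the original post
        "credit": credit,      # the publisher to credit
    }

unit = make_unit("image",
                 "https://example.com/photo-large.jpg",
                 "https://example.com/post/42",
                 "Jane Designer")
print(json.dumps(unit, indent=2))
```

An aggregator that preserves the `source` and `credit` fields keeps the chain intact no matter how many hops the unit makes.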

As users increasingly follow content rather than its sources, those units of content often operate remotely. Data is consumed in dashboards and desktop clients, and the previously clear line between publishing and consuming content has grown thin. Data is moved around rather than compiled and then consumed. Twitter and Tumblr are good examples: both allow information to be filtered out and presented to the user in a clearer manner. The noise is separated from the content, and news items can be consumed in a far more customizable way. Interacting with data on such platforms often means aggregating it: liking a post on Tumblr, for example, or passing along a link on Twitter.

The average user has much more data to filter through, and new products need to work within that workflow. Experiences need to demonstrate an understanding of the clutter that users face and allow the appropriate level of reduction.

Products that enable users to reduce the noise ensure better focus on the product and its message. Some recently introduced services experiment with new ways of achieving this: by analyzing social channels, for example, a system can filter through different circles of social activity and surface only the valuable content.

When users can edit their inputs, they are inevitably left with more bandwidth for the product at hand, which in turn extends its usefulness.

It is a shift away from mere sharing, blogging and following, and it offers longevity and real value to both the user and the product.

2016: these are topics I still think about.
