Critical Study of Digital Sociability as a design concern for Designers in the Digital Age
My design practice centres on two aspects of society: sociability and digitality. On the surface, the mainstream typifies digitality as a factor undermining human sociability. This could be seen as a form of simple ageism: the elders finding fault with the Millennials. However, as I watch digitality creep into every aspect of the social order, I have come to see the bias against extreme digitality as akin to the sexual and ethnic biases of the last millennium.
Therefore, my critical study will examine the aspects that make “digital sociability” a dimension of concern for designers, just as gender issues, sustainability, and originality are.
Parts of my study will include:
- What is a hyper-connected society?
- What is the noosphere?
- What class of civilisation is humanity?
- How is digitality affecting sociability?
- Are we approaching a singularity?
- How meaningful are nonphysical interactions?
- The case for and against privacy.
- Are closed gardens a false privacy, or privacy at all?
- How different are closed gardens from firewalled states?
- How do we face the problem of digital illiteracy?
- Is digital literacy a subset of economic conditions?
- What is digital literacy really part of?
- How differently are we socialising today?
- What is the range of social expressions in use today?
- Who is the audience of our social expressions?
Footnote
This critical study expands upon two of my other essays, which you will find attached in the appendix. In them I may have used the term “Cybersphere”, with which I tried to frame the “cyber world” as nothing more than a layer upon the spheres of our world.
A: An Introduction to the Noosphere
The Twenties
Often, when discussing our planet Earth, we break it into at least two parts: the geosphere and the biosphere. The former concerns the physical properties of the planet; the latter concerns the “unique” component of the planet of which we are a part.
It was in the 1920s that the geochemist Vladimir Vernadsky published his work arguing that living organisms could reshape a planet as surely as any physical force. Around the same time, the philosopher Pierre Teilhard de Chardin developed the concept of the “Noosphere”.
In Vernadsky’s theory, the noosphere is the third stage in the Earth’s development, after the geosphere (inanimate matter) and the biosphere (biological life). Just as the emergence of life fundamentally transformed the geosphere, the emergence of human cognition fundamentally transformed the biosphere.
We know the geosphere in everyday life as the ground beneath us, the air around us, and (for those fortunate enough, or not, to live on the coast) the sea. These are, in turn, the lithosphere, the atmosphere, and the hydrosphere. According to the aforementioned theory, there is another sphere, one in which we are increasingly immersed in modern everyday life: the noosphere.
We know the following:
- The lithosphere’s “medium” is the soil; waves propagating through it are called earthquakes.
- The hydrosphere’s “medium” is seawater; waves propagating through it are called waves.
- The atmosphere’s “medium” is the air; waves propagating through it are called storms.
Wave propagation is when a medium moves because a disturbance or an oscillation is applied to it. If the noosphere is the “sphere of human thought”, then its “medium” must be humans themselves; and a wave propagating through human beings is simply communication.
Any given object in our physical world is mostly atoms and empty space. It takes a human being with conscious thought to give it significance. It is this thought, when communicated, that propagates from person to person: nothing is physically transferred, but the “medium’s position” changes (for a human, their position in relation to the object, i.e. their point of view, hence their opinion).
Noology
A meme is “an idea, behaviour, or style that spreads from person to person within a culture.”
Richard Dawkins
The term was coined by the biologist Richard Dawkins: just as genes are self-propagating units of biological information, memes are self-propagating units of cultural information. By analogy, if biology is the “interesting interaction of matter as defined by genetics”, then noology is the “interesting interaction of people as defined by memes”.
Let us examine the interaction between the units of the medium: humans.
Six degrees of separation is an idea first proposed by the Hungarian author Frigyes Karinthy in 1929. He hypothesised that “using no more than five individuals, one of whom is a personal acquaintance, he could contact the selected individual using nothing except the network of personal acquaintances.”
Research in 2011, made possible by the advent of social networking websites, shows:
Facebook limits users to having 5,000 friends, but the median figure was far lower at just 100 contacts, or 0.000014% of Facebook’s total membership. Despite this relatively small number, the results showed 99.6% of all pairs of users were connected by five degrees of separation, and 92% were connected by four degrees. On average, the distance between any two members was 3.74 degrees.
Similar research on Twitter shows the degree of separation of all its users lies between 3.435 and 4.67. This goes to show that the Small World experiment actually works, at least among the people of the world who are connected to the World Wide Web’s most popular social networking websites.
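The measurements above amount to a simple graph computation: model the social network as nodes and friendship edges, then average the shortest-path length (the degrees of separation) over all pairs of members. A minimal sketch in Python, using an invented six-member network; the names and topology are hypothetical, while the cited studies ran the equivalent computation over hundreds of millions of users:

```python
from collections import deque

def bfs_distances(graph, start):
    """Breadth-first search: shortest hop-count from `start` to every reachable node."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return dist

def average_separation(graph):
    """Mean degrees of separation over all pairs of members."""
    nodes = list(graph)
    total = pairs = 0
    for i, a in enumerate(nodes):
        dist = bfs_distances(graph, a)
        for b in nodes[i + 1:]:
            total += dist[b]
            pairs += 1
    return total / pairs

# A toy six-member "society" (invented for illustration).
society = {
    "Ana": ["Ben", "Cho"],
    "Ben": ["Ana", "Dee"],
    "Cho": ["Ana", "Dee"],
    "Dee": ["Ben", "Cho", "Eve"],
    "Eve": ["Dee", "Fay"],
    "Fay": ["Eve"],
}

print(round(average_separation(society), 2))  # → 1.93
```

On this toy network the average works out to 1.93 degrees; adding more friendships drives the average down, which is exactly the effect the Facebook and Twitter studies observed at scale.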
For the purposes of this discussion, I would like to define a hyper-connected society as such:
A hyper-connected society is one in which all members who socialise with each other have many, and potentially short, paths connecting them to every other member and to the society as a whole. These members should also be able to receive or perceive the mood or consensus of the society quickly and easily.
This basically means that every part and subset of society has the potential to affect, and be affected by, everybody else, very much like particles and molecules in chemistry. Any student of chemistry can tell you that when performing a titration (for example), the solution changes practically instantaneously if the concentration is high enough (although in reality the propagation of the chemical reaction is not truly instantaneous).
Virality
It is only in a hyper-connected society that we see the difference between a meme and a “viral” social media event.
In the past, ideas had to spread from the town square (people literally shouting out their ideas for passersby to hear) or like a wave through the crowd in a stadium; only slowly did the geographical origin of ideas take a back seat. The advent of the printing press is celebrated for propagating information en masse. But there is another way of looking at the printing press:
It allowed memes to spread irrespective of their geographical origin, as far as words could be transported in books, translated into different languages, and passed on further. These books may outlive their author, and be passed down through time.
The printing press allows memes to spread through four dimensions.
Very much like their biological namesakes, viral social media events (example: viral marketing) spread faster than memes, which usually take a while to “germinate” in forum discussions and blog comments, before being shared around the Internet.
The “instantaneous” nature of the Internet has allowed information and memes to spread irrespective of the author’s place in the world, and even has the potential to break the sound barrier (in 2011, news of an oncoming earthquake spread ahead of the quake itself). The noosphere is, technically, hyperspatial.
Richard Dawkins argues that memes are easily understood through natural selection: if an idea has no traction, it simply peters out and dies. However, artists and designers devote their work to trying to convey, and essentially keep alive, a particular idea. Are we then going against the grain of the natural selection of memes?
Nikolai Kardashev
The Kardashev scale is a method of measuring a civilisation's level of technological advancement, based on the amount of energy a civilisation is able to utilise. The scale has three designated categories called Type I, II, and III. A Type I civilisation uses all available resources impinging on its home planet, Type II harnesses all the energy of its star, and Type III of its galaxy.
Carl Sagan’s calculation of where humanity stands on the Kardashev scale (which Kardashev developed in 1964) puts us at around 0.7: undefined by the original scale, and less than a Type I civilisation.
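Sagan’s fractional figure comes from his logarithmic interpolation of Kardashev’s scale, K = (log10(P) − 6) / 10, with P the civilisation’s power use in watts; on this formula Type I sits at 10^16 W. A quick sketch, assuming a rough ballpark of about 2 × 10^13 W for humanity’s present consumption (an assumed figure):

```python
import math

def kardashev(power_watts):
    """Sagan's interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts.
    On this formula Type I = 1e16 W, Type II = 1e26 W, Type III = 1e36 W."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's present power use: very roughly 2e13 W (an assumed ballpark).
print(round(kardashev(2e13), 2))  # → 0.73
```

The formula is only an interpolation; Kardashev’s original scale defined just the three discrete types, which is why a value around 0.7 is “undefined by the scale”.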
Taking this as inspiration, what “type” of civilisation are we with our ability to communicate? We do have the World Wide Web, part of the greater Internet, that spans the whole world*.
* not quite true, since more than a billion people (mostly in China) work and live their lives behind a firewall.
Internet memes are not known to take off in the People’s Republic of China, nor are any truly populist ideas. This is because journalists are forbidden from reporting on certain topics, social networking websites are required to self-censor certain keywords, and so on. Still, much as life/genes survive in the form of microorganisms, those “netizens” must resort to unnecessarily clever riddles, built from particular combinations of numbers and words, to communicate their ideas. In this light, memes take the form of “thoughtcrime”.
This means that although some major world cities, and some entire countries, are “hyper-connected societies”, the whole world is not. A truly hyper-connected humanity is the exact opposite of the world outlined in Orwell’s book Nineteen Eighty-Four.
I propose humanity is still a Communications Type 0 Civilisation.
Until efforts like the Open Data initiative become common, and if humanity can come to a consensus about common topics (such as how old the Earth actually is), humanity will probably remain at Type 0.
B: The Digital Age
Digitality
Digitality refers to the current sociological period of the early 21st Century, analogous to modernism and postmodernism (the sociological periods covering the decades at the end of the 20th Century). Aspects of the condition of living in a digital culture include:
- ubiquity of speedy digital communications,
- requirement of participation in interactive media,
- availability of thoroughly indexed information.
When the economic or cultural condition of society changes significantly, a paradigm shift has occurred.
Both the Noosphere and the Kardashev scale were developed in what is now called the Modern era; both concepts are only “relevant” from their contemporary world going forward, meaning that the conclusions derived from discussing them would be unchanged within the same era. Fast-forward to contemporary history, and things start to change.
At the dawn of the industrial age, trains and the telegraph were the paradigm shift for society, moving from mere telecommunications (communication where the participants cannot be physically present but their intent can still be conveyed) to the availability of fast, reliable transport. It even brought about a level of synchronisation (the creation of time zones to avoid train collisions).
The Internet made the distance and presence of the participants irrelevant because of the sheer immediacy of Internet-enabled computerised telecommunications (no longer needing an “operator”) and the ubiquity of information about almost everything that anyone would need to know and everything that ever was.
Rather than look at “what we used to do, and now do differently”, there are a few things that simply didn’t have an equivalent before Digitality.
Where we used to need to be at a particular venue to hear a musician play a piece, we can now enjoy music on our own, wherever we are. This paradigm shift in consumer behaviour turned music into a commodity, enjoyed the same way as books (a commodification itself brought about by the printing press).
Pessimists would point to this as the negative effects of cocooning.
Reflection or Annotation?
Looking back at my now infamous discussion of the “Cybersphere” during May 2009’s FutureEverything “social media cafe” session, I found that the objections raised stemmed from preconceptions about what I was theorising.
The term Cybersphere was objected to, and the term Noosphere was suggested instead. There was, however, a difference: the Noosphere is the theoretical sphere of human thought in our world, whereas the Cybersphere, obviously referencing “cyberspace” or the Internet as a whole, suggested that the Internet is not a separate “world” from our physical world, but an augmentation of it.
Thoughts and memes originate and reside in the noosphere. If we take art, or any media (such as graphs and charts), to explain or illustrate thoughts, then that is a “manifestation” of the noosphere in the physical realm; yet all of these “thoughts” are the result of processing input from the physical world. Likewise, photographs or articles online (such as Wikipedia) are “representations” of some object in the physical world.
From post-modernity to digitality, there has been a shift of focus with technology. No longer content to “simulate” the real world in a computer (physical world to noosphere), there has been efforts to bring aspects of the noosphere into our physical world. This corresponds to “Virtual Reality” technologies being downplayed for “Augmented Reality” technologies.
Augmented reality (AR) is a live, direct or indirect, view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data.
We could say that instead of making copies or exact replicas of physical-world things in the noosphere, AR allows us to “annotate” the physical world with information about it, thereby bringing both “worlds” together instead of literally “making virtual reality real”.
The above example assumes that the noosphere and the physical world (which includes both the geosphere and the biosphere) are not equivalent, and that objects in one can have an equivalent in the other.
Digital Applause
A singularity, of the technological kind, is a theoretical moment in time when human nature is so radically changed that the future course of civilisation becomes unpredictable, or even unfathomable: a paradigm shift. While most accounts cite artificial intelligence or advanced nuclear energy as the source of this change, I choose to focus on the human element:
What if we are all components of a greater organism called the human species, and our unstoppable rate of reproduction and spread across the world, combined with each person’s specialisation, akin to the specialisation of individual cells and organs in our bodies, will create a paradigm shift?
Like individual cells in the human body, each unconscious of its place in the grander shape of a human being, what if our individual interactions amount to very little effect on the human species as a whole?
“To greet (their leader) together as one super-organism with one voice. Applause.”
Michael Stevens (Youtube: +vsauce)
He explains how clapping is the “diarrhea of noise”: simply an overflow of enthusiasm. I take this to be a common example of hyperspatial or nonphysical interaction. In his video “Why Do We Clap?” he explains how the listener is unable to discern the gender, size, or other physical attributes of the clapper: it anonymises the audience’s appreciation.
“Alone on the Internet we don’t applaud; but we do like, and share, and favourite, and retweet. Those actions might be a sort of ersatz applause. IRL your clap is lost into the crowd, aggregated into the total sound, and online so are your likes and favourites. They join a collective gesture as a sort of digital applause.”
What is interesting is that he later notes that, unlike traditional clapping, this “digital applause” is traceable. I would argue that this is a very good thing for social discoverability.
Therefore, appreciation in the age of digitality is moving towards a “bichotomy”: at once becoming a singularity, and becoming a more individual connection between the performer and the appreciator.
Relationship Saturation
In a hyper-connected society, every person is able to communicate with every other person, and everyone and everything can be easily referenced. In other words, every person or object can have some kind of significant relationship. The very nature of the most famous part of the Internet, the World Wide Web (WWW), demonstrates this.
The World Wide Web uses the hypertext transfer protocol (HTTP). Hypertext means “more than text”, where text is a linear, possibly chronological, string of information, each bit usually relating to the one before or after it. Hypertext, by nature, allows the reader to move from any point of the text to any other point on the string, or to another string entirely, by way of “links”.
If everything is connected, then ideas and memes can more easily propagate through the noosphere. When a civilisation achieves at least Communications Type 1, there would be consensus on planetary issues.
Many people would cite the blandness of the world described in Orwell’s Nineteen Eighty-Four, and the “fakeness” of Yevgeny Zamyatin’s world in the novel We. In recent years, there has been a movement to design and manufacture with a locality in mind: manufacturing decisions that hinge not only on production costs, but on the significance of assembly for the product. Products made in sweatshops were discouraged, and local assembly was introduced. The notion was that even if less efficient and less cost-effective, local assembly meant the consumer would be able to appreciate the product, thereby giving it significance and building a relationship with it.
It may be for this ironic (and admittedly postmodern) reason that Burberry’s ironed-out vision of British design is winning out over the rather kitsch vision of Britain by Mulberry. Also why the mass produced iPhone sells better than business specific smartphones these days.
In the BBC documentary “Secrets of the Superbrands”, Apple seems to have made use of elements found in religion: evangelical staff, objects on pedestals, the espousal of a higher ideal of design, all the way to having staff clap and shout at the opening of a store. Rather than a store of glass cases where the customer appreciates a product as in a museum (which unfortunately has the subliminal effect of telling the customer this is from the past), products are proudly on display as if they were something everyone should aspire to (the future).
The clapping and shouting at the opening of a store speaks deeply to humans as a deep-seated consensus-building event. Babies learn from an early age that putting their hands together forms a simple percussion instrument. All these techniques speak not to how true a religion may be, but to the effectiveness of the technique in imbuing a point of view upon the audience or customer.
C: Privacy is dead
The David Brin Question
Named after the science fiction writer who argued in 1996 that the issue isn’t whether surveillance will become ubiquitous (given technological advances, it will), but how we choose to live with it. Sure, he argued, we may pass laws to protect our privacy, but they will do little except ensure that surveillance is hidden ever more deeply and is available only to governments and powerful corporations.
Long live transparency? It’s no secret that each government has its own surveillance programme, the more pervasive of which are given labels like “Big Brother”. When investigation found that the United States’ National Security Agency (NSA) played “little or no role” in disrupting malicious activities, it became fair to ask whether its programmes are worth the cost, either in tax payments or in degraded privacy.
“Privacy is dead — get over it”
― Steven Rambam
It isn’t just the NSA. The Federal Bureau of Investigation (FBI) cracked down on the anonymous online black market The Silk Road; even the Tor Network (used by activists the world over to avoid censorship) is being watched by PRISM.
“If you EVER thought you had privacy online, you’re an idiot.”
― Larry Fein
It is strange to think that the concept of privacy is linked to questions of secrecy, and in turn to surveillance. Online privacy is crucial for crime victims, whistleblowers, and dissidents; but it is also relevant to digital design practice: many apps and systems require logins. If users cannot trust you to keep their data safe, why should they give it to you? Eventually, this circle of mistrust will cause digital citizenship to plummet, and digital literacy with it.
“Private blogs”: it is a fallacy to assume that traditions and behaviours we had in the analogue world, such as keeping a private diary, translate seamlessly into digitality. Any information saved online, whether or not behind a password-protected account, is inherently saved onto a database that has the potential to be indexed.
The WISPA CALEA Compliance Guide, which details the rules, provided by the Wireless Internet Service Providers Association (WISPA), that wireless ISPs are required to follow under the Communications Assistance for Law Enforcement Act of 1994 (CALEA), mentions the following:
The ISP is not allowed to tell you that you are being snooped on (enforcing the privacy of law enforcement);
While it is true that in a hyper-connected society everyone is able to access almost anything and contact anyone else, society expects this to apply to information people intentionally put up for others to see. The “appeal of online services is to broadcast personal information on purpose.”
It is probably the norm to be offended by stalkers: people or entities who watch what you are looking up without putting up any information themselves. Very much like how masks function, we find the inability to recognise someone creepy. What does it mean to recognise something? We notice it, and it gives us some kind of information, for example, its identity.
“I’m getting penalised for what I do in my own living room now?!”
So while it may be fun for an installation artwork to have its function partially hidden, to be discovered by the participant, it is a different matter entirely when it comes to digital design, whether the issue is privacy (the object secretly takes your photo to produce the effect of surprise), discoverability (the object’s intent is hidden, so we do not even know what it is capable of), or usability (the function isn’t placed prominently enough for a user to notice).
Firewalls
One of the most talked-about topics regarding Internet privacy, let alone anonymity, is China. Given that China’s booming economy is attracting a lot of investment, it is no surprise that designers in every field may be drawn to work with or in China at some point in the future. Issues pertaining to digitality in China are complicated.
The “Golden Shield Project”, or as the rest of the world commonly calls it, the “Great Firewall of China”, involves more than just blocking VPNs. It is common knowledge that companies, both local and foreign to China, self-censor what their users post or save. Jason Ng of blockedonweibo.tumblr compiles and explains a list of terms that are blocked on one of China’s most popular social networking websites.
Shi Tao, a Chinese journalist, was jailed in 2005 after Yahoo! released information about his private emails to the Chinese government. Yahoo!’s Chinese staff intercepted these emails and passed his reportedly bad impression of the country to the government, which in turn sentenced him to ten years in prison.
In this case, not only is the government of China blocking domains and deleting items on Chinese social networks when they happen to use a keyword or a particular combination of digits; companies are also blackmailed into complying, or even into performing self-censorship of their own.
The tiniest silver lining is that “most” of the Internet is still accessible from China, and you are simply not allowed to say certain things; also, most of the surveillance tools are homegrown or legally and economically forced upon companies. It could be worse.
The Internet is built upon the idea that each Internet Service Provider (ISP) allows anyone to connect, via a series of connections and services, to any other user on another network. An intranet, on the other hand, means “within the network”: it only allows connections and services for, and accessible by, users on the same network.
Iran throttles the Internet speeds of civilian Internet Service Providers, blocks access to a long list of domains (second only to China), and plans to someday implement a “halal Internet”, essentially a “national intranet”.
A national intranet is an Internet protocol-based walled garden network maintained by a nation state as a national substitute for the global Internet, with the aim of controlling and monitoring the communications of its inhabitants, as well as restricting their access to outside media.
Countries that have implemented a “national intranet”:
- North Korea
- Myanmar
- Cuba
Walled Garden
Similar to a real walled garden, a “walled garden” is a system in which nothing can enter or leave, and anything inside is only allowed to flourish (if at all) within its boundaries. The sad truth is that most of us are already on our own “global intranet”: Facebook.
Everyone who uses a smartphone is part of a walled garden, where apps have to be pre-approved to be “let in”. Anyone who has read a book on Amazon’s Kindle has actually made use of Amazon’s closed book database. Anyone who has downloaded music from iTunes (although its catalogue is almost all-encompassing) has essentially made use of Apple’s pre-approved list of music.
For students, one famous academic walled garden is JSTOR. In programmer and activist Aaron Swartz’s Guerilla Open Access Manifesto (July 2008), he said the “world’s entire scientific and cultural heritage […] is increasingly being digitized and locked up by a handful of private corporations.” He found out the hard way that even for a store of academic papers, downloading them in a way that contravened the User Agreement made him a felon facing a federal lawsuit.
Walled gardens and national intranets are the artefacts of “cocooning” on a national level.
Forget geographical boundaries drawn on the literal smoothness of the noosphere. In an age where corporations are “persons” (see below), these multinational entities have “privacy rights” that protect what they do with your data.
Could a corporation legally be considered a person? There is a degree of equivalency; in the US this notion is called corporate personhood: the notion that corporations have certain legal abilities typically ascribed to humans. The recent concerns about corporate personhood are less about the idea itself, and more about the degree of personhood: how much of a person is a corporation?
Jon and Ben (Youtube: +ConspiracyStuff)
While they discuss the level of free speech a corporation can have, I ask whether a corporation has a right to privacy. When we input our details into “our” user profile on a social networking website, we are essentially telling the system about ourselves; do we then expect it to intentionally watch everything we do on the system?
This question of privacy between users and the system has been around since the dawn of telecommunications. In the past it meant trusting your postman not to open your letters, or trying to ignore the fact that the telephone operator could overhear your entire conversation; today we have to grapple with “inhuman” systems and computers, the public face of the digital age.
Anonymous
By October 2003, when 4chan was founded, nearly 11% of the world’s population was online.
Anonymous is “a loosely organised consortium of hackers formerly linked to 4chan” who have been credited with scores of cyber-attacks on various organisations. Most recently, Anonymous has been linked to DDoS attacks against anti-WikiLeaks organisations.
News networks like the BBC and Al-Jazeera have been painting Anonymous less as a malicious group, as in the KTTV Fox 11 News report of 2007, and more as an organised entity with representatives and hidden leaders of its own; i.e. a cyber Al-Qaeda.
It is no surprise, though, that Anonymous has become infamously associated with a website whose Japanophile users anonymously post discussions of comics and cartoons; any competent investigative reporter in 2006 would have found http://4chan.org as the top result in a Google search, resulting in the now infamous tirade by Paul Fetch, a KTTV investigative reporter, who labelled them “hackers on steroids” and “domestic terrorists”.
Particularly interesting is how 4chan, the American offspring of Japan’s 2chan, has created an environment where the online disinhibition effect has flourished. Posts made on these websites do not need a username, so they are all attributed to “Anonymous”.
Although not formally affiliated, users of social networks who feel empowered by the anonymity granted by these imageboards have come to attribute themselves collectively as Anonymous. The very nature of Anonymous is that it is an anarchic, collective entity of seemingly random users whose membership is through mere attribution; because of this, its membership has spread beyond 4chan into various other social websites.
At the 2011 South by Southwest (SXSW) conference, a festival that started in 1987 and has since become an annual pilgrimage for every influential web designer (and Anonymous ninja), Christopher Poole, the founder of 4chan, said Zuckerberg was wrong to equate online anonymity with cowardice, and that:
“Anonymity is authenticity. It allows you to share in a completely unvarnished, raw way. To fail in an environment where you’re being identified by your real name is costly. We value content over creator.”
In his book Antifragile, Nassim Nicholas Taleb asks: if being fragile is undesirable, what is its opposite? Most would answer “resilient”. He explains that a fragile object breaks when pressure is applied, while a robust object simply takes much more pressure to break. He suggests that the true opposite of fragility is when an object becomes stronger as pressure is applied. We had no word for such a property, so Taleb created one: antifragile.
Anonymous is an antifragile amalgamation of the opinions of most of the Internet-enabled human population. It is itself a meme, “you cannot kill an idea”.
D: Digiteracy
What is digital illiteracy?
Digital literacy is the ability to effectively and critically navigate, evaluate and create information using a range of digital technologies. It requires one “to recognize and use that power, to manipulate and transform digital media, to distribute pervasively, and to easily adapt them to new forms”.
A digitally literate person may be described as a digital citizen.
Marc Prensky coined the terms “digital native” and “digital immigrant”. The former refers to a person born into digitality (presumably today’s younger population); the latter to someone who “adopts” technology later in life (presumably the older generation). This divide may be part of what causes the younger generation to resist digitally illiterate forms of education: they read, perceive, and process knowledge in a completely new way, and speak to each other in an entirely new language.
Growth of the Oxford English Dictionary (number of entries)
- 1928: 252,200
- 1986: 321,500
- 1989: 291,500
- 1993: 297,987
- 1997: 301,306
- 2005: 301,100
While it is true that each successive generation will use and adopt new words, and sometimes give new meaning to words, the “rebellion” of a new generation is not new to humanity.
Around the end of the 19th century, labour laws were being passed to prohibit child labour, the number of years spent in education was rising, and marriages were happening later in life. This all coincided with the end of Modernity and the beginning of Postmodernity. “Teenagers” were born. As buses and automobiles became widespread, the young could be educated at schools farther from home; schools became common ground for a wider population, essentially triggering the beginnings of a “hyperconnected society”.
Knowing all this, it is no surprise that the generation who lived through the Postmodern era would become ageist and pass the buck to the next generation by calling the digitally literate “millennials”, as if the end of the 20th century would see the same kind of change as they saw.
However, very much like how the end of the 19th Century coincidentally saw the paradigm shift and hence the transition from Modernity to Postmodernity; the end of the 20th Century coincidentally (or not? Read up on the “Y2K crisis”) saw the paradigm shift and hence the transition from Postmodernity to Digitality.
Many “millennials” are surprised to learn that a “dot-com bubble” occurred at the close of the 20th century. It is laughable that these “digital immigrants” managed to mislead the digitally illiterate, who at the time held most of the investment funds.
Population & Censorship
Population statistics for any country can be obtained and used to forecast its future. Since a population does not change quickly (barring war or some other disaster), it is seen as a safe basis for predicting up-and-coming trends, unlike judging whether a country is suitable for business forays based on economic policies, which are easily swayed by human opinion and politics.
Earlier this year, the United Nations’ Broadband Commission issued a report ranking every country on various Internet usage statistics. The report could find no correlation between any of the socioeconomic statistics and broadband penetration. These findings are very relevant to the ever-growing industries of web design, app design, and digital design in general.
Note that there is very little correlation between mobile and fixed-line broadband penetration. Yet many web designers tout the advantages of responsive design on the assumption that the user will first view a website on their home computer, then “take it away” on their mobile phone as they leave home. Google’s Chrome browser, for example, shares a list of recently opened tabs between PCs and mobile devices.
Another graph shows that it is unsafe to assume that fast-growing economies (indicated by GDP growth) will have more mobile broadband users. Many new applications and systems ask for microtransactions (otherwise called in-app purchases) on the psychological assumption that consumers will spend more over a period of time than they would pay in a lump sum at the beginning, making for a sustainable income; this may not be true at all. Perhaps this is why the freemium model works.
Population does not affect mobile broadband usage either: a large number of potential users does not mean the infrastructure will be in place to deliver broadband to them.
The simplest assumption is that urbanites will be the earliest adopters of new trends and technology, and hence show high adoption of mobile broadband. This is not true either.
Recalling the definition of digital literacy, I would like to argue that national censorship of the Internet may play a part in it. Remember that a digital citizen must be able to recognise that the Internet gives them the freedom to communicate, to transform and relay any digital media, and to freely acquire information on any topic.
Some statistics:
- United States of America. Population: 0.31 Billion. Population density (per sq. mile): 84
- People’s Republic of China. Population: 1.35 Billion. Population density (per sq. mile): 365
- Republic of India. Population: 1.24 Billion. Population density (per sq. mile): 954
The OpenNet Initiative and Reporters Without Borders classify the above nations as follows:
- United States of America. Little or no censorship.
- People’s Republic of China. Pervasive censorship.
- Republic of India. Selective censorship.
As the above illustrates, a dense, highly populated nation can be heavily censored, leaving a large portion of humanity disconnected from the rest of the hyper-connected world. Yet it is this very population that uses overly clever tricks to evade censorship and transmit ideas and memes, keeping its digital literacy on par with that of those whose freedoms are secure.
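To make the comparison concrete, the cited figures can be joined into one small structure. The numeric severity scale below is my own illustrative encoding of the qualitative labels, not part of either source:

```python
# Join the cited population figures with the censorship classifications.
# The 0-2 severity scale is an illustrative encoding, not from either source.
censorship_rank = {"little or no": 0, "selective": 1, "pervasive": 2}

countries = {
    "United States": {"pop_billion": 0.31, "density_per_sq_mile": 84,
                      "censorship": "little or no"},
    "China":         {"pop_billion": 1.35, "density_per_sq_mile": 365,
                      "censorship": "pervasive"},
    "India":         {"pop_billion": 1.24, "density_per_sq_mile": 954,
                      "censorship": "selective"},
}

# List countries from least to most densely populated, with severity.
for name, d in sorted(countries.items(),
                      key=lambda kv: kv[1]["density_per_sq_mile"]):
    print(f"{name:14}  density={d['density_per_sq_mile']:4}  "
          f"severity={censorship_rank[d['censorship']]}")
```

Laid out this way, the pattern is suggestive rather than strict: China is both dense and pervasively censored, while India, the densest of the three, is only selectively censored.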
I could conclude that the level of digital literacy worldwide (among the human population with Internet access) is actually nearly the same, though were the restrictions lifted, this might change over time.
Urbanisation
You would assume a high correlation between a country’s degree of urbanisation and its broadband penetration: the more urban the country, the higher the penetration. It would make sense for city dwellers who can afford a PC to demand higher Internet connection speeds.
However, this is not the case.
Perhaps it is simply that rural areas have fewer amenities, so their residents must resort to the Internet to get the luxuries and services that urbanites take for granted, like telemedicine and shopping. Perhaps rural areas do not waste the infrastructure they acquire, and so Internet-enable the few devices they have?
Given that acquiring computers and Internet-enabling them is not a problem, why do rural populations acquire more, and faster, broadband? You could call it a “Berlin problem” (referring to how Berlin was utterly destroyed in World War 2, enabling it to be rebuilt with completely new technology): rural areas have no vested interests in older forms of telecommunications to replace, so they can easily adopt new, fast broadband.
I would hypothesize it is the case of computer literacy.
Technology gone wrong?
“The illiterate of the 21st Century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn.”
― Alvin Toffler
You could say that digital literacy managed to bring about not only digital citizens, but digital governments. Instead of a population of people who can use information technology to its fullest, economically as well as politically, today we see governments who “recognise and use information technology, to manipulate and transform digital media, to distribute pervasively, and to easily adapt them to new forms”, especially in the form of crowd and meme control on social networking systems and websites.
This is highly relevant to design. Just as sustainability concerns where a material originates, and economics concerns whether a product’s cost is low enough to be priced profitably, digital design carries its own repercussions: products mass-manufactured after being digitally designed, apps built by freelancers in emerging economies, websites and apps hosted on servers in low-tax countries, or a version of an app rolled out that is easily spied upon by state surveillance.
Should it not be a digital designer’s duty to be concerned with the local repercussions of their designs?
E: Eusocial
The Social Animal
“Man is by nature a social animal; an individual who is unsocial naturally and not accidentally is either beneath our notice or more than human. Society is something that precedes the individual. Anyone who either cannot lead the common life or is so self-sufficient as not to need to, and therefore does not partake of society, is either a beast or a god.”
― Aristotle, Politics
The term “social networks” has been misunderstood in recent years, being identified with systems such as Facebook and Twitter; correctly termed, these are “social networking websites”. Social networks stem from the idea that units of people within society can form distinct and permanent relationships with one another, forming a network. This behaviour is not unique to humans.
In ethology, insects such as ants, and some species of mole-rat, rank as eusocial creatures (the highest level of social organisation in the animal kingdom). A eusocial creature must have all of the following traits:
- Reproductive division of labor;
- Overlapping generations;
- Cooperative care of young.
“Reproductive division of labour” is when a creature’s sexuality (or sterility) means it has been allocated into the worker class of its group or species. While this may have been true in humanity’s primarily patriarchal past, this is changing. Are humans no longer eusocial?
“Overlapping generations” is when multiple generations, such as parents and children, are found living and working together; unlike lion prides that generally only socialise in groups of the same generation.
“Cooperative care of young” is observed in humans in the form of schools and nurseries. We also have an aspect of altruism, saving or helping a child even if it is not our own offspring.
In his 2012 book The Social Conquest of Earth, Wilson refers to humans as a species of eusocial ape, supporting his reasoning with our eusocial similarities to ants. Through cooperation and teamwork, ants and humans gain a type of “superpower” that is unavailable to other social animals that have failed to make the leap from social to eusocial. Eusociality creates the superorganism.
What this means is that if humans were to communicate efficiently and cooperate tightly enough, humanity would become a force of nature of its own (the noosphere) and/or reach a singularity and move on to become a Communications Type 1 civilisation. However, we probably do it too well, considering how little attention we pay to the rest of the biosphere in our daily lives, and how few of us designers ever consider product users “besides humans”.
Survival of the Nicest
The most common explanation for the diversification of life on Earth is the theory of evolution. However, we already know that some species, humans included, are capable of altruism; even the concept of eusociality has problems: humans choose to be “cooperative” (such as babysitting an unrelated child), whereas eusociality is a behavioural strategy that is not specifically selected by an individual.
Zoologist and anarchist Peter Kropotkin wrote about animal altruism in 1902. The social Darwinist concept of the “survival of the fittest” has been so challenged by researchers of altruistic behaviour in animals that a new term has been coined: “survival of the nicest”. This may be relevant to digitality: how does liking, sharing, or retweeting someone else’s post on the Internet help your personal chances of survival? Does that post actually constitute work that helps the community?
I’d like to point out that the “reproductive division of labour” does happen among humans, though not purely along sexual lines: teenagers are mostly exempt from work. This allows them to focus their energy on what are essentially very complicated mating rituals: better education, better sporting ability, and a more attractive appearance and outlook all contribute to a human’s probability of reproducing.
Culture is a difficult word to define, but there is general agreement that it comprises the “intangible and unquantifiable” artefacts produced by a society (language, customs, art, poems, songs, and so on) as opposed to its physical artefacts, its “material culture”. The free time that comes from not being required to work gave teenagers and young adults time to produce culture.
Culture became so important in human society that we have created a culture of applause whenever an individual produces something “nice”. As the importance of culture grew, young adults, and even the more mature, had to sit down and spend time creatively producing more artefacts. The act of sitting down and making use of our large craniums to devise a better, more creative expression came to be known as “design”.
Design, from Old French designer, from Latin designō (“I mark out, point out, describe, design, contrive”), from de (or dis) + signō (“I mark”), from signum (“mark”), is the creation of a plan or convention for the construction of an object or a system.
One often-overlooked example of humans designing something “nice” for everyone else is the wheel, and the road. Everyone knows that, besides fire, the wheel is among the most important inventions in human technology: it allowed us measly humans to gain a mechanical advantage to move large, heavy objects.
How does developing a wheel on your body help you along the way? A giraffe with a slightly longer neck can still reach food that is a bit higher; it will eat more, live longer, and have more offspring, making its mutation more common. But if your mutation is a wheel that is only a little bit round, it does not mean you will make more babies. For a wheel to be useful for movement, it requires a prior invention: roads. Why didn’t animals build roads?
Michael Stevens (YouTube: +vsauce)
In a video titled “Why Don’t Any Animals Have Wheels?”, he explains that wheels do not exist in nature for a couple of reasons (a wheel without an axle is simply a rolling object). The more important question is why animals didn’t build roads. It isn’t because animals don’t create structures (beavers build wooden dams, ants dig complicated anthills), but because roads aren’t selfish enough.
Richard Dawkins examined this idea in 1996. By the theory of evolution, you would only do something if it increased your own chances of survival. Yet we collectively pool resources to pay people to construct roads. Roads do not give the road workers a higher chance of survival, nor do they benefit only the taxpayers: anyone (even a tax dodger), or anything, that stumbles upon them can use them. “A moocher can come up and use it anyway, have time left to make a bunch of babies and prosper.” By this example, humans are the ultimate in animal altruism, and perhaps still a eusocial animal: we design not just for our own benefit, but for every human, even those we have no relation with or have never met.
Cocooning
Ironically, the ability to design ever bigger and grander systems of infrastructure, like roads, led us to create the Internet, which among other things brought forward the possibility of a completely hyper-connected society, helped smooth out the wrinkles on the noosphere, and sped up the spread of memes.
In the 1990s, Faith Popcorn gave this broader phenomenon a name: cocooning. The Internet, home entertainment, mobile phones, alarm systems, self-checkout, filters for our personal air and water: all are paraphernalia of cocooning, a tendency toward lonelier, more solitary experiences over the last 30 years.
Behaviours that were translated from the analogue world into digitality include online shopping, online dating, massive consumption of movies and music, mass multiplayer online gaming, etc.
The Internet, the technology that brought about digitality, has either:
- caused humans to no longer be eusocial, or even be social animals;
- caused a paradigm shift that takes eusocialism to a new level.
Futurist Alvin Toffler theorised that in a post-industrial society (modernity onward), information may substitute for the dominance of material resources (the realisation of a perfect noosphere). He also predicted that the gap between producer and consumer would be closed by “mass customization”: new technologies would enable the radical fusion of producer and consumer into the “prosumer”. These prosumers may indeed have come to fruition through the rising number of entrepreneurs and freelancers, the availability of self-assembly kits, the publication of open-source designs on the Internet, and the dropping price of 3D-printing equipment.
An exhibition at the British Design Museum explored the future of 3D printing. Aside from its advantages over subtractive manufacturing, the exhibitors claim 3D printing will allow medium-sized businesses to “take back” manufacturing power from the countries we presently use to manufacture the majority of our goods. One example given: if Britain could produce some of the products it currently outsources to China, it would be less dependent on a foreign power and sovereign again. Cocooning on a national level.
Nationalism aside, this may be a step back in humanity’s eusocial properties: wanting to do everything yourself instead of sharing the advantages is more selfish. The irony is that we have come so far in technology that efficiency is now being sacrificed for pride. Or is it?
Take a step back and look at humanity today (or at least the ones living in digitality). We make almost none of the products we consume ourselves; we can step outside to buy any food; the work we are required to do is either overly specialised or not specialised at all. The division of labour has either worked too well, or is working against us. People are no longer necessarily producing content of their own, but merely consuming, sharing, and generally applauding others’ works. It seems the act of appreciating something is association enough.
Take Japan, for example: cocooning too much has led to a level of withdrawal never seen before digitality, a phenomenon called hikikomori.
“Hikikomori” (ひきこもり or 引き籠もり), literally “pulling inward, being confined”, i.e. “acute social withdrawal”, is the Japanese term for the phenomenon of reclusive young adults who withdraw from social life, often seeking extreme degrees of physical isolation and typically confining themselves to their parents’ homes. It has been a prominent public mental health concern in the 21st century.
This condition affects an estimated hundreds of thousands of people in Japan, one of the first countries to move completely into digitality. A large portion of the affected population (although not exclusively correlated) are geeks known as “otaku”.
Hikikomori & Otakus
“Although people have criticized otaku for being socially inept and unable to make friends, when we consider the types of connections they do make, this is clearly not the case, especially in contrast to the hikikomori who don’t talk to anyone if they don’t have to. The otaku has a purpose and therefore an identity. The hikikomori has neither.”
Essentially geeks, otakus are largely consumers of modern visual culture originating primarily from Japan. The films and comics produced in recent years largely consist of the same memes, done in different settings and at different levels of quality: a visualisation of fantastical ideas that originate from social restrictions in Japan. An antifragile subculture.
What happens when these consumers of media turn into prosumers in the digital age, and a design distilled purely from memes, amalgamated on the Internet, is translated into the physical world? Hatsune Miku, a “Vocaloid”, is a character conceived this way: fans of modern visual culture gave a personality to a piece of artificial-voice software. From traditional graphical art to computer-rendered videos, to virtual- and augmented-reality systems, Miku has even appeared at her own concerts via a projection system. Keep in mind that Miku has no origin in the physical world; she is entirely a meme. As technological advancement progresses, the noosphere is beginning to literally overlap with the geosphere.
Conclusions
We live in the digital age, and we should pay attention to the issues we face as digital natives become the norm. Our world-view as designers should no longer be restricted to the material world, but open to the significance of design going forward: a good design should not only build upon the very best practices of the past, but also reflect our aspirations for the future. Digital technology is becoming ubiquitous, so instead of segregating it from traditional analogue design, or embracing it wholesale, we should find more ways to integrate the two. Digital illiteracy is on the rise in developed nations, and digital literacy on the rise in the third world; popular assumptions tying wealth to technological literacy must be stamped out. Our work should be useful and helpful to people regardless of which side of the geopolitical divide they are on. We mustn’t forget that as a species we are continually evolving: even if our physical forms haven’t changed, our psychological forms will.