Early screenshot of the project

Multi-directional real-time creative collaboration platform based on WebSockets

BSc Thesis | Middlesex University London, SAE Institute Belgrade


Marko Mitranić
30. June 2017, Belgrade, Serbia
Bachelor of Science Thesis — 1st Class with Honours

Plateaux.Space is open source and available on GitHub.
Hosted Live: https://plateaux.space/


Author’s Foreword

23.06.2019

Technology is really moving at an amazing pace — and so are we! When I wrote that we are “standing on the shoulders of giants”, I did not expect that in just a few years’ time those giants would be previous versions of ourselves.

A couple of years ago, when I started this project, WebSockets were The Old New Thing. Today they have seen an evolution in real-world applications much like what the visionary Joe Armstrong had foreseen back in 2009. It took ‘em a while, Joe, but they are here to stay. With the rise of reactive frontend methodologies, and especially with WebSocket capabilities moving from Node to other capable backend platforms like Python, Elixir and ReactPHP, developers now feel at liberty to implement WS wherever they’d like.

I believe that for a long time devs were somewhat scared to implement WS in their everyday applications: they felt pressure from business as well as peers to use only battle-tested technologies, and meddling with WS seemed to bring only more complexity. This has been proven very much wrong. Now WordPress uses WebSockets heavily in its admin panel, regular non-interactive websites do backchannel communication over WS instead of Ajax, and live-search solutions return debounced results over WS.

One of the very interesting applications of WS that currently threatens to take the frontend world by storm is Elixir’s Phoenix LiveView capability. It is still clunky and works only with stateful backends written in Erlang/Elixir, but boy do I expect the JS world to jump on this idea. REST no more: JS would return to dealing with the frontend, while all the business logic would happen on the backend (Node, maybe), like it did back in the olden days.

One of the things that I did not expect was that 3D on the web would remain a relatively unused technology in broad terms. Yes, it exists and works better than ever, thanks to advancements in browser efficiency and our CPUs getting better and better. But I do not see that anything has really moved on the topic of creating a better Developer Experience (DevEx, these days) when working with 3D. No cool abstractions or stateless frameworks have appeared, and it is as hard to debug as it ever was, since all JS gets compiled and even then it is hard to follow what the libraries do. JS developers in general have been so concentrated on dealing with legitimate issues in frontend architecture and systematisation, transitioning away from BEM and into more reactive methods, that they have somewhat let go of making advancements in frontend experimentation. It will be some time before we actually see 3D design elements such as menus, buttons and the like.

In the end, this project was very fun to do, and it proved to perform well, handling 3 GS of traffic at peak usage. It was added to the WhiteStorm portfolio of projects, and then it lay dormant for quite some time. I am currently concentrating on Dockerizing the thing, as it no longer fits into my workflow, in order to get it up and running again. T̵h̵i̵s̵ ̵w̵i̵l̵l̵ ̵b̵e̵ ̵d̵o̵n̵e̵ ̵i̵n̵ ̵t̵h̵e̵ ̵n̵e̵x̵t̵ ̵f̵e̵w̵ ̵d̵a̵y̵s̵,̵ ̵a̵n̵d̵ ̵b̵y̵ ̵t̵h̵e̵ ̵t̵i̵m̵e̵ ̵y̵o̵u̵ ̵r̵e̵a̵d̵ ̵t̵h̵i̵s̵ ̵t̵h̵e̵ ̵p̵r̵o̵j̵e̵c̵t̵ ̵s̵h̵o̵u̵l̵d̵ ̵w̵o̵r̵k̵ ̵n̵o̵r̵m̵a̵l̵l̵y̵.̵ It’s alive now.

The only future prospect from my side, right now, is to rewrite the server segment in Erlang.

About the project

Plateaux is a multi-directional real-time creative collaboration platform based on WebSockets. Plateaux is a state-of-the-art musical collaboration mechanism. Pull requests are welcome.


Multiple performers collaborate in a real-time, virtual, three-dimensional music jam session, with the goal of creating a non-stop, ever-changing, looping tune. It feels like a game: in the top menu, you are presented with a selection of sleeping melodies. By selecting any one, you spawn a three-dimensional Gizmo which can then be controlled by any other performer.

Plateaux was created by Marko Mitranić as the open-source practical segment of his major BSc thesis at SAE Institute of Technology, in June 2017. For any questions regarding the project, please contact the author directly, or open a comment, GitHub issue or pull request.


Abstract

In today’s web development climate, programming revolves around clearly defined best practices, libraries, abstractions and frameworks that allow for improved code readability and exponential growth in the speed and complexity of development. This concept of problem abstraction allows for a more postmodernist approach to web development, where technologies can be mixed in unconventional ways in order to create new experiences. On the other hand, vendor libraries often lose their purpose by not honouring the single responsibility principle, which can lead to them becoming too complicated, incurring performance losses, or even changing the codebase of a project in such a way that the developer becomes “vendor-locked”.

Walking the thin line between the two, and making an objective decision that provides the benefit of vendor abstraction (and thus helps create an unconventional web experience) while not vendor-locking the project, is as common a process as it is crucial to the success of any project.

Through the practical act of developing an application based around WebSockets, WebGL, Web Audio and Node.js, it is possible to observe this process, extract functionalities from vendor libraries when needed, and draw new conclusions and solutions specific to these seldom-used technologies and protocols, raising the bar for vendor libraries by setting groundwork and offering suggestions for possible future best practices in these areas.

Table of Contents

Key Terms & Abbreviations

API — Application Programming Interface, a list of standardised commands, protocols and formats used for inter-program communication. APIs often allow externally created applications to access operating system functionality, and are used so that individual programs can communicate directly and use each other’s functions. (Kang, 2011)
Bootstrapping — From Douglas Engelbart’s Law, the process of getting better at getting better, or augmenting our capabilities at an exponential growth rate. (Bardini, 2002, p. 449)
DnD — Dungeons & Dragons, a world-famous fantasy tabletop role-playing game originally designed by Gary Gygax and Dave Arneson in 1974.
Idoru — アイドル “aidoru”, a Japanese permutation of the English word “idol”. Refers to manufactured or virtual pop singers and artists e.g. Kyoko Date.
JSON — JavaScript Object Notation, a lightweight data-interchange format based on a subset of the JavaScript Programming Language. (JSON Organization, 1999)
lerp — Linear Interpolation, the process of gradually changing position along a defined line, by using an alpha parameter as a modifier.
MMORPG — Massively Multiplayer Online Role-Playing Game.
MUD — Multi User Dungeon, a computer-based text or virtual reality game which several players play at the same time, interacting with each other as well as with characters controlled by the computer. (University of Florida, 2005)
MOO — Mud Object Oriented.
NPM — Node Package Manager, a revolutionary dependency manager that makes it easy for JavaScript developers to share and reuse code. (NPM, 2017)
PLATO — Programmed Logic for Automatic Teaching Operations, the first generalised computer-assisted instruction system, developed on the University of Illinois’ ILLIAC I computer.
SRP — Single Responsibility Principle, a programming practice that states that each software module should have one and only one responsibility. Coined by Robert Cecil Martin in late 1990s.
TCP — Transmission Control Protocol, a standardized set of rules that controls establishing, maintaining a network connection and data-delivery events that happen in the meantime.
UDP — User Datagram Protocol, a less reliable internet protocol for sending short messages called datagrams between clients.
Vendor — third-party code or assets. Vendor folders are usually used to perform version control over dependencies and frameworks. (McFarlin, 2015)
WS — WebSockets, an independent TCP based computer communications protocol, providing full-duplex communication channels over a single TCP connection. (Ratchet, 2016)
WebGL — Web Graphic Library, enables web content to use an API based on OpenGL ES 2.0 to perform 3D rendering in an HTML canvas element in browsers without the use of plug-ins. (Mozilla Developer Network, 2017)
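The lerp entry above maps to a one-line function. A minimal illustration (the variable names here are ours, not taken from any particular library):

```javascript
// Linear interpolation: move from a toward b by the alpha modifier (0..1).
const lerp = (a, b, alpha) => a + (b - a) * alpha;

// A quarter of the way from 0 to 10.
console.log(lerp(0, 10, 0.25)); // → 2.5
```

Animating a Gizmo back to orbit is then just repeated calls with a growing alpha until it reaches 1.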

Introduction

It is a popular belief that, much like interactivity was in the middle of the last decade, multi-directional real-time collaboration is the future of the Web. Many developers have already recognised the need for a more postmodernist approach to problem solving (Purdue University, 2011) — by combining best practices from multiple already established technologies, we can discover new solutions to problems we often didn’t know we had.

In the past decade, frameworks and libraries have revolutionised web development by abstracting and encapsulating complicated concepts behind vendor solution facades (vendor libraries) which are easier to understand and implement. This process can be called “Complexity Hiding”, “Encapsulation” or “Abstraction”, and represents the idea that entire systems can be represented as simplified and encapsulated objects to be treated at a “higher level”. In other words, all of the object’s data and business logic are irrelevant to the end-user, who can concentrate their full efforts on implementing the object creatively into a larger system, and thus finding a new purpose for the original vendor system. (The University of Utah, 2008)
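As a toy illustration of this idea (the class and method names below are invented for illustration, not taken from any real vendor library), the end-user composes a simplified facade without ever touching its internals:

```javascript
// A vendor-style facade: the internals (buffers, nodes, scheduling)
// stay hidden, so the end-user composes the object at a "higher level".
class AudioLoop {
  #playing = false;                               // encapsulated state
  constructor(name) { this.name = name; }
  play() { this.#playing = true; return this; }   // chainable surface
  stop() { this.#playing = false; return this; }
  get playing() { return this.#playing; }
}

// The caller never sees how playback works, only the simplified object.
const loop = new AudioLoop('pattern-1').play();
console.log(loop.playing); // true
```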

2017 started as a year in which the Node Package Manager (NPM) repository was the largest in the world, with over 350.000 packages; by May, the number had already approached half a million. (Figure 1) In today’s web development it is of the utmost importance to be able to quickly switch and adapt between these libraries and domain languages, and the key part of this process is the ability to recognise which extension fulfils the requirements, in an already oversaturated sea of extensions. But in order not to decide on a vendor library blindly, a developer needs to be able to make a quick comparative glimpse over the competition, to keep an eye on the requirements, and to fundamentally understand the task, the protocols used and the environment.

Figure 1 — Raw module count comparison between the most popular global repositories. Graph created on a yearly basis, starting from June 30th, 2016. (DeBill, 2017)

When tackling a task, developers too often seem to opt for the single-target linear approach, where a narrow functionality is implemented, tested and published. On the other hand, tackling more than one problem at the same time, especially while mixing languages and technologies and taking the best of what each one has to offer, might allow us to gain new insight into some common problems of browser content creation over the internet.

As we have observed, audio and three-dimensional space are seldom used on the web, and this research will contribute to and help educate the community. In learning and researching the usage and implementation of Three.js, Node.js, the Web Audio API and others, we can solidify our understanding of these advanced web technologies and strive to develop new design patterns or ideas for future research prospects. We will investigate a mix of technologies and solutions that are common on other platforms, such as the Unity game engine, and try to employ them to conceive new ways of problem solving specific to JavaScript. While documenting the project development, we will witness the process of choosing between libraries and work on the question of physics in a web-based three-dimensional environment.

As a part of this research, we will also investigate the cultural impact of this innovative possibility for creative collaboration and make a comparative analysis of other projects with a similar goal. The research will recapitulate future prospects, as well as some of the questions it might open, with the hope of providing solid ground for future researchers of the web technologies we have used, and even for contributors to the project itself.

The Problem

I believe that WebSockets and WebGL can be utilised and paired with an asynchronous Node.js server in order to create an unusual channel of nonverbal communication between clients, who would collaboratively manipulate mesh-object representations of audio loops in a Three.js-based three-dimensional space.
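A minimal sketch of what such a channel could look like on the wire. The message shape and names here are illustrative assumptions, not the project’s actual protocol:

```javascript
// Encode a client's gizmo manipulation as a compact JSON message
// that a native WebSocket can carry to the Node.js relay.
function encodeGizmoMove(gizmoId, position) {
  return JSON.stringify({ type: 'gizmo:move', gizmoId, position });
}

// Defensive decode on the receiving side.
function decodeMessage(raw) {
  const msg = JSON.parse(raw);
  if (typeof msg.type !== 'string') throw new Error('malformed message');
  return msg;
}

// In the browser this would ride a native WebSocket, e.g.:
//   const ws = new WebSocket('wss://example-host/');
//   ws.send(encodeGizmoMove('gizmo-1', { x: 0, y: 1.5, z: -2 }));
```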

Research Niche & Background

In the last couple of years we have witnessed a number of commentaries and articles, such as the famous “Web Design is Dead” news story (Mashable Inc., 2015) and discussion threads asking the same question (O’Connor, 2015), claiming that web design in its classical sense is in fact dying. The idea comes from a school of thought that values web pages that are innovative and unexpected in design, or that test the rules of usability in some way (Ellis, 2015) — in place of skeuomorphic or flat design wars, grid systems, content management systems (CMS), usability standards, readily available and mass-produced templates, and similar artefacts of modern web design.

While there is some truth to the idea, because most mass media and business websites do indeed look alike (Pratas, 2015), it actually refers to artefacts of the past from a completely different technological context: an internet where developers and designers were so locked into what was possible that they were forced to concentrate on devising new ways of interaction and attention gain. An internet with no specialised technical solutions and none of the underlying technologies that power today’s instant messaging boards, Ajax calls and “promises”, GPU-accelerated visualisations, big data, social impact, interconnected peripherals and the Internet of Things (IoT), mobile web, native applications, HTTP/2, the semantic web, microformats, image recognition, JSON RESTful APIs, machine learning etc. Historically speaking, underlying technologies have anticipated the needs of users at the time, evolved, and allowed for standardised or abstracted-away means of performing these tasks. Over time, programming languages can be abstracted into Application Programming Interfaces (APIs) and frameworks, which are later further abstracted into use-case specific vendor libraries. From Engelbart’s Law, we know of this process as “Bootstrapping”, or augmenting our capabilities by “getting better at getting better” (Bardini, 2002, p. 449).

Every time an underlying technology changed its form, developers and even whole industries would switch to the new standardised version in a matter of just a few years (Evans, 2015). A period of rapid growth would occur, where the new technology was massively used and abused. This liminal period is truly important because it allows technologies to combine, adapt and mature into standards, or die away in irrelevance.

It is in this liminal period that true transformations and evolution happen: the period of finding meaning in technologies by pairing them together. HTML became semantic via Schema microformats years before HTML5 (Schema.org, 2011). Content serving became abstracted away, and web servers migrated to Cloud, Cache and Content Delivery Networks (CDN) (Palacin, et al., 2013) and other networks. In another example, the creators of Twitter Bootstrap v1 intended (Twitter, 2011) to solve only basic styling and grid templating needs, but once paired with jQuery and FontAwesome it became a widely accepted framework (Burge, 2015). In a similar manner, by learning from the failures of others and using multiple technologies in an innovative manner, 888 WebComponents were created (WebComponents, 2017). By creating NPM (Node Package Manager), Node.js sent ripples through the industry, as Engelbart’s “Bootstrapping” showed its true power: the NPM repository module count resembles an exponential growth function when aligned chronologically, and the same can be concluded for PHP’s Composer Packagist, judging by the data presented in Figure 2.

Figure 2 — All time module count trends. (DeBill, 2017)

Through this same liminal process of technology pairing, this research will navigate the development of an innovative solution and a new web-based experience, documenting the process and creating a system by pairing WebSockets with multiple other vendor solutions like Three.js and Node.js, their derivatives and plugin ecosystems.

Project Goal

The goal of this Bachelor of Science research thesis is to problematise the conventional usage of the WebSockets protocol through research performed as part of a practical project based on the unconventional pairing of WebGL with other libraries. We will follow and document the process of creating a web-based three-dimensional space named “Plateaux”, in which multiple users can participate in the creative production of work: a virtual music creation session. Users of the application collaborate in real time by manipulating three-dimensional objects (Gizmos) that visually represent audio samples. The audio samples are simple abstract pattern sounds created specifically for this project by musician Srđan Popov, with the idea of being used as predefined building blocks. Users will discover that by overlaying multiple of these audio samples they are able to create an endless, ever-changing musical composition, static in terms of content diversity, but highly agile in terms of the resulting melody. A “Plateaux” where the web application user unknowingly becomes a composer and leaves their mark by taking part in the creation of an “open” work of art as defined by Umberto Eco (Eco, 1989).

In order to create the best possible atmosphere for future prospects and third-party research, the resulting code of this system will be published on GitHub as open source, under the standard MIT licence, with rights for reuse and modification with attribution. After all, I hope that this project will be a lasting contribution to the web development industry: proposing solutions to common programming problems, presenting us with some philosophical user experience questions, and allowing future researchers to create further UX research or tests based on my project and findings.

Technical Goals and Limitations

We will define some user stories in order to grasp a better understanding of the goals and limitations of the application. This is the first step when delving into application logic and its possible solutions.

The “Plateaux” application should feature a static splash screen where the participant is presented with a basic usage ruleset, an explanation of functionalities, and a link to the GitHub repository.

The participant will be assigned to a virtual “room” (GS) shared with up to three other clients, who are able to observe and control the same three-dimensional objects (Gizmos). The size of the room (maximum number of participants) will be fixed at four, and may be revised later if performance drops are observed.

The participants will be able to move Gizmos by “grabbing” them (clicking and holding) and dragging them to another location on screen. When a Gizmo is in the picked-up state, it will be visually represented as such to other users, who will be unable to influence it.

Dropping a Gizmo from the drawer onto the canvas space will activate an animation similar to an orbital gravity pull, and will start playing the appropriate sound loop for that Gizmo.

Each mesh will represent a single sound loop, and will feature an iconographic visualization of that tune. Tunes are organised into three colour coded categories based on their musical nature, as defined by the author of the samples.

The orbit will be slightly tilted in the start position, so that all users can clearly visualise the particle movement. The camera will not be fixed; it will use orbit controls with zoom, centred around the static planet.

If a gizmo is being held by a user, other users will not be able to influence the Gizmo until the “mouseup” event occurs. At this point, Gizmo will be animated back to orbit, and it may be picked up mid-flight during the animation.

If a Gizmo is being held but is not changing its position, no messages will be emitted to the server or the other members of the GS, in order to prevent message overflow.

No authentication will be needed to access the server, and the server will not feature an Application Programming Interface (this might be considered as a future prospect). The server will handle only basic message transmission and validation, with as little logic as possible while still achieving the result.
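The thin relay described by these user stories could be sketched as follows. This is a simplified assumption of the message flow (the event types, room shape and client interface are invented for illustration), not the project’s actual server code:

```javascript
// Allowed client events, mirroring the grab/drag/drop stories above.
const ALLOWED = new Set(['gizmo:grab', 'gizmo:move', 'gizmo:drop']);

// Basic validation: the server keeps as little logic as possible.
function isValid(msg) {
  return Boolean(msg) && ALLOWED.has(msg.type) && typeof msg.gizmoId === 'string';
}

// Rebroadcast a valid message to every other participant in the room (GS).
function relay(room, sender, msg) {
  if (!isValid(msg)) return 0;          // silently drop malformed traffic
  let delivered = 0;
  for (const client of room.clients) {
    if (client !== sender) {
      client.send(JSON.stringify(msg)); // each client wraps a WebSocket
      delivered += 1;
    }
  }
  return delivered;
}
```

In a real deployment each `client` would be an open WebSocket connection and `relay` would run inside the server’s message handler.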

The question of Legacy support

The project, being an experiment in nature, offers a purely technical level of solutions and does not include usability testing, personas or other tests on human subjects. All of the aforementioned techniques will be considered in more detail in the “Future prospects” section of the research.

Additionally, this means it will be treated like a Chrome Experiment in terms of legacy technology support. Only modern versions of desktop Chrome will be supported; other browsers, including mobile, are allowed but not thoroughly or specifically supported. Babel will be used to transpile the code from ES6/7 to standardised ECMAScript 5, which runs natively in the browser.

Three-dimensional Web

On the low level it is WebGL that allows us to render three-dimensional scenes in real time in the browser, but it was the library Three.js that popularised the very idea that it can be comfortably used, by providing us with some pre-written WebGL shaders. Three.js is almost 7 years old at the moment, with its initial release dating to 24 April 2010, but it took quite some time before it gained production value, stability and, more importantly, traction in the web development community. It wasn’t until the release of WebGL 1.0 in March 2011 that the first art projects and games started showing up. Today Three.js has over 33.000 stars and at least 11.000 forks on its GitHub page (Cabello, 2013), while its homepage features over 140 different first-class projects (Three.js, 2017). It has been encapsulated many times into plugins and libraries that offer specific improvements.

The largest plugin platform for Three.js is ThreeX, which offers 48 gaming-specific plugins for abstracting away common complicated technologies, algorithms and concepts like pathfinding, DOM events or collision calculation. On the other hand, by inspecting their corresponding GitHub repositories, we can see that most ThreeX plugins have not been updated in over two years, and that there is little documentation for most of them. In comparison, Whitestorm.js comes across as a more modern framework which tries to abstract away most of the work related to operations and transformations of three-dimensional objects, aiming to make three-dimensional implementations mainstream by providing developers with a React-compatible state machine which can be implemented in real-world content creation systems (Mozilla Developer Network, 2016).

Audio on the Web

Apart from literally streaming audio tracks, or as a complement to video, audio is seldom used on the Internet, at least in terms of being used in innovative ways as a user interface tool, application logic core, result of a program execution, or means of collaboration. Those innovative capacities of the format are far more visible and regularly used on websites for the vision-impaired, but have found little use with the wider audience. Bleeding-edge movements like Google Magenta, the first artificial intelligence (AI) to write its own song (Eck, 2016), or the various Chrome Experiments, have recognised the lack of use for the format in this sense, and have often tried to implement audio as the application core in art projects. Websites like Pictoplasma use audio as an indivisible part of the visual representation of work (Kamp, 2012). Radioooooo, for example, has managed to build its whole purpose and interface around audio as its core functionality (Troubat, 2013). As people spend more and more time commuting, on their mobile devices most of that time, one of the more prominent uses of audio on the mainstream web in the last year was the massively popular addition of audio tracks to accompany blog articles. There are more examples of creative audio use, which we will revisit in detail in the comparative study section of this research.

Being seldom used on mainstream websites, compared to visual and even tactile functionalities, audio has attracted few browser-side handling libraries. The World Wide Web Consortium (W3C) has had an approved standard for the Web Audio API since at least 2011, with the last draft dating to August 2015 (World Wide Web Consortium, 2015). Browser support for the API is not on par with that of WebSockets, but is fairly strong, with all modern browsers having implemented it to some degree and some still requiring prefixes (Can I use, 2017). Additionally, historically speaking, and in contrast to images and text, audio files do not offer any additional semantic meaning to webpages, similarly to video files and Flash objects, which led SEO agents to advocate avoiding the format for a long time.

This research will try to leverage the Web Audio API to some degree, through some of the most popular open-source JavaScript libraries based on it. From its specification, the API was designed with the intent to be used and paired with other web APIs like the “2D Context” and “WebGL” graphics APIs, which complements the goal of this project perfectly (World Wide Web Consortium, 2015).

Theoretical and Methodological Framework

Research Methodology

We will perform mixed research, combining different research methods in multiple interconnected parts: a semiotic textual analysis and case studies, linear historical grounded research, comparative analysis, and quantitative experiments, all with the clear goal of familiarising us with the technologies, other works and their repercussions, and enabling us to take informed action while creating the “Plateaux” project as the result of action research.

Starting off with grounded research into the history of the technologies mentioned and used in the rest of the paper may provide powerful insight into the decisions that the original authors and consortiums made while creating them. Understanding the goals behind the implementation of protocols like WebSockets, of vendor libraries like Socket.IO, and of the history of Groupware itself will help with placing the project in a social and technical context, and thus with finding the related projects and initiatives that will be further comparatively analysed. It will also help in explaining the goal of the project in more thorough terms.

The comparative study is dual in nature. It will be performed to compare prior art projects, analysing them for similarities, innovative technical solutions, vendor libraries and even clues about usability experience, but also to make technical comparisons between libraries and even native technology implementations, as is the case with Socket.IO and Whitestorm.js. Doing these tasks as comparative research will allow for establishing common ground based on defining, and in some cases determining, exact goals. The comparative study will be interdisciplinary in nature itself: sometimes it may be based on textual analysis and existing case studies, or even on practical experiments, as is the case with client-side WebSockets connection performance. In this particular case, the decision to use the native WebSockets implementation instead of a world-famous vendor library was objectively justified by a comparison based on both grounded research and practical quantitative benchmark tests.

At a later moment in the research, enough data will have been accumulated to allow for action research surrounding the creation of the web application itself. Action research is defined by the Association for Supervision and Curriculum Development as a disciplined process that assists in improving and refining the creation process by using the data gathered during the process itself (Sagor, 2000). Development of the application will present us with additional logical and semantic problems, which can only be solved by learning more about specific best practices in those situations. Unfortunately, as WebGL is underdeveloped, and there is often little to no documentation for the libraries that abstract it, some problems do not have a clearly defined set of best practices in the JavaScript environment. These situations will require that the research goes beyond the JavaScript community, and into other languages and frameworks.

With the project and research done, we will have enough experience and knowledge on the topic to expand into advanced research of the social and cultural impact that groupware collaboration on artistic and practical projects might have. This segment will be augmented by semiotic textual analysis of literature, lectures and other academic sources, and will bring the paper to its conclusion.

Procedure & Technical Research

Concept of multi-directional communication in computing

On December 9th, 1968, as part of the presentation of his “oN-Line System”, Douglas Engelbart presented the never-before-seen concepts of “Augment Televiewing” and “Groupware” (Doug Engelbart Institute, 2017). These concepts have since been implemented numerous times on the modern internet, most notably as the Virtual Network Computing (VNC) protocol and Internet Relay Chat (IRC) respectively, and their influence can be found in this research paper.

By simply examining the release dates of real-time collaboration software, crowd-sourced at Wikipedia (Wikimedia Foundation Inc, 2017), one can clearly notice a rising interest in open-source Groupware solutions, as defined by Lotus (Lotus Development Corporation, 1995), and a level of technical maturity that enables them; both are clear signs that there exists an ever-increasing need for deeper networking and collaboration. If we analyse the list further, it is clear that most of this software is productive in nature, trying to solve seemingly corporate needs: word processors, code editors, content management, game products etc.

Multiplayer Online Games (MMO)

The contemporary idea of Groupware is not new on the modern internet, but before these now-common technologies allowed it, developers of applications and games had to use UDP, a protocol not intended for open communication over the internet, or create network sockets based on the implementation in C (Microsoft Inc, 2017), its Java alternative xSocket Core (Roth, 2011), or similar technologies. These could only be paired with server and client applications written in C, Java, .NET, Cocoa and similar native or proprietary languages, for which compiled binary executables and dedicated ports were often needed. In order to be used from within the browser, a combination of the Java Applet-based xSocket Core with the now-dead Flash platform was often employed (Lung, 2008). Additionally, the lack of developer experience, technology support and design patterns all slowed down the process of fundamentally understanding asynchronous servers (Martin, 2008, pp. 178–182). Because of all this, a high entry level of knowledge was required, and apart from the segment of the games and entertainment industry invested in Flash-based mini games, companies seldom decided to follow this path.

Server architecture through history

The first recorded multi-user environments were the 1975 Multi-User Dungeons (MUDs) created on the University of Illinois “Programmed Logic for Automatic Teaching Operations” system (PLATO), with gaming and educational programs such as pedit5, Panther and dnd (University of Florida, 2005). Most of these were based on mudlib (MUD Library), an adaptation of the BCPL input/output library, and operated on a far lower level of abstraction than today’s servers. (Bartle, 2003) Early multi-user libraries set up the groundwork, and for the next twenty years a wide variety of MUD libraries like AberMUD and TinyMUD followed with their own descendants, each trying out its own features and solutions to common problems. The main issue with servers that manage multiple parallel clients was the lack of advanced asynchronous code design patterns. In this context, the first MUD system with a notable number of features was Tinytalk by Anton Rang, released in January 1990. The client gathered a large following after recognizing the communication complexity and hiding the Telnet infrastructure of TinyMUD behind an abstraction. (Rang, 1990)

The progress continued during the 2000s in the form of MMORPG games, web-based chat and notification systems, POS systems and distributed server networks. Servers still lacked the polish we have today, but this was indeed a time of prosperity, as the architectural solutions that would mould today’s internet were in fact defined during this period. (Martin, 2008, pp. 317–348) A long period of time passed between setting up the first MUDs and object-oriented MUD (MOO) games, and fundamentally understanding the whole server network systems that games such as Second Life or World of Warcraft use. Through a trial-and-error period which lasted more than 30 years, architectural issues like real-time debugging, clock timing, memory efficiency, security, network lag and modularity were solved, and best practices were created.

On May 27th, 2009, all the conditions were met and the world met Node.js. Ryan Dahl, the creator of Node.js, was aiming to create standardised support for real-time websites with push capability, “inspired by applications like Gmail”. One might argue that most of the technologies had already existed for a long time, but as Tomislav Capan wrote in his article published on the Toptal engineering blog, “in reality, those were just sandboxed environments using the internet as a transport protocol to be delivered to the client”. (Capan, 2013) This is the moment when the abstraction level and feature bootstrap got to a point that attracted an enormous number of developers, as we saw in Figure 2. Ryan presented developers with a completely new non-blocking, event-driven I/O server paradigm.

Based on Google’s V8 JavaScript engine, Node promises regular speed improvements by virtue of constant upstream improvements and patches to the underlying technology. For the first time in web development, the barrier between the back-end and front-end of an application was blurred, as both used JavaScript and similar coding styles. Node.js runs on a standardised and widely available technology stack, and most servers support it out of the box, even in a local development environment, which additionally sped up the adoption rate. Being very lightweight, Node.js uses an event-driven architecture, so every call and operation can be traced as a single chain of asynchronous callbacks, which in turn allows the server to run on a single thread instead of spawning a new thread or process per client request, as is the case with traditional server stacks like Apache. This is the very foundation of the non-blocking I/O nature we mentioned earlier. In this respect, it is intended to be combined with other technologies like WebSockets. For the first time, a specific server technology was designed from the ground up with the goal of making the web a more interactive place.

Contemporary developments

A new wave of socially powered, web-based Groupware, often built on Node.js, is already making a silent revolution in communication by bringing the technology closer to everyday use, and has the potential to solve a wider variety of needs. Generally, these applied services “reinvent the wheel” by offering new communication methods and mediums. Examples of this can be clearly seen in the cases of the Facebook VR platform, which replaces the well-known Internet Relay Chat with a three-dimensional and omnipresent space in which participants act as avatars of themselves (Facebook Inc, 2017), and the Google Docs platform, which offers indirect real-time collaboration on textual documents and other connected services such as mind maps, data processing sheets etc. We can even notice the interest in this type of networking through the prism of the persistently high popularity of technologies like Node.js and WebGL (Figure 3), which have a historical significance in bringing real-time web applications closer to developers by creating an abstraction above various platform and protocol support algorithms.

Figure 3 — Google Trends graphs for three common technologies Facebook has been involved in since the F8 2017 conference.

Programming itself has become Groupware collaboration over the internet, via ever so popular versioning systems like Git and SVN. Furthermore, the largest provider of Git services, GitHub, has over the years built a strong community and feels more like a social network than the crucial part of the development process that it is. Even though these services are still turn-based, more and more real-time collaboration software is appearing, with multi-user implementations suddenly available through apps like Gobby and Etherpad, editor tunnel plugins like Floobits or Emacs Collaborative modules (EmacsWiki, 2014), and even Unity scenes via the Kinematic Soup Scene Fusion plugin.

Introduction to WebSockets and Three.js

A brief history of WebSockets

In short, WebSockets is an advanced protocol for communication between a server and a client. It makes it possible for a client to have an event-driven, persistent, interactive communication session with the server over a single TCP connection, instead of plain HTTP request based communication. (Mozilla Developer Network, 2017) It is a bi-directional, full-duplex, long-lived and message-based transport layer, which in its most basic form creates a long-lived message communication channel, so that the client can actively listen for events or messages triggered from the server side. This approach to communication is quite different from static HTTP, where any new information would have to be explicitly requested from the client side. The WebSockets technology is what allows web applications to become truly interactive, as all HTTP-based solutions require repetitive requests, specific request control, vendor libraries or proprietary code, and usually have “expensive” latency as a side effect.

As we have seen, WebSockets are not a new idea, and the first proposal can be traced back to the 1971 draft RFC 147 by the Network Working Group at MIT. (The Internet Engineering Task Force, 1971) The Internet Engineering Task Force (IETF) finally accepted the standards proposal RFC 6455, titled simply “The WebSocket Protocol”, in 2011, and it is widely implemented today. (Internet Engineering Task Force, 2011)

WebSockets perform best in situations where low latency and bi-directional server-client communication are critical. When making a comparison, we should keep in mind that HTTP protocol tests and documentation often use the 48-byte handshake size of the ping program, or the 200-byte requests documented in Chromium’s SPDY research. (Google Inc, 2012) Nevertheless, this data size is not realistic in real-world applications, because browsers send a large overhead containing cookies and client data which can reach more than 2.5 KB in size. While HTTP and WebSockets have roughly equivalent initial connection handshake sizes, WebSockets require only the initial handshake, and all subsequent WebSockets messages carry only 6 bytes of overhead (2 for the header and 4 for the mask value). (Internet Engineering Task Force, 2011)

{
  "name": "gizmo_3",
  "status": "gizmoLerp",
  "angle": 16.215679180562645,
  "distance": 189.22229183811163,
  "elevation": 1.5394624484165007,
  "position": {
    "x": -121.01853303699522,
    "y": -30.18991428113992,
    "z": 7.547478570285023
  },
  "lerpTo": {
    "x": -165.3532752913974,
    "y": 0,
    "z": -91.99657645192471
  }
}

The JSON shown above is a mock object of a lerping Gizmo that will be used during production. Its length is 272 characters as reported by the Chrome DevTools Network profiler, and we are emitting at least ten of these events per second (depending on the network connection). Judging by this data, it is safe to say that the architecture of the “Plateaux” application requires near-instantaneous bi-directional communication with a large amount of tiny data chunks, such as mouse and Gizmo coordinates and Gizmo states, so it makes perfect sense to combine the best of both worlds: use HTTP requests for the initial page and asset preload, then create a persistent WebSockets connection with the server, which transports messages between clients. There will be no model and data store, so there is no need for the data-manipulation robustness of server-side languages like PHP and Swift. Looked at in this light, the very idea of real-time non-blocking asynchronous communication with the whole of the data kept in temporary memory is the very idea that Node.js was about, so the functionalities of both technologies seem perfectly suited to this project.
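A back-of-the-envelope calculation makes the case concrete. The 272-byte payload and the 10 Hz rate come from the measurements above; the 6-byte frame overhead is from RFC 6455, and the 2.5 KB HTTP header figure is the rough cookie-laden overhead discussed earlier:

```javascript
// Rough per-second traffic estimate for the Gizmo updates described above.
var payloadBytes = 272;   // measured length of the lerping-Gizmo JSON
var ratePerSecond = 10;   // events emitted per second, at minimum

var wsOverhead = 6;       // WebSocket frame: 2-byte header + 4-byte mask
var httpOverhead = 2500;  // typical cookie/header overhead of one HTTP request

var wsTraffic = (payloadBytes + wsOverhead) * ratePerSecond;
var httpTraffic = (payloadBytes + httpOverhead) * ratePerSecond;

console.log('WebSockets: ' + wsTraffic + ' B/s, HTTP polling: ' + httpTraffic + ' B/s');
```

At this message rate the HTTP variant spends roughly ten times the bandwidth of the WebSockets variant just on overhead, which is why a persistent connection is the natural fit here.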

Using Vendor WebSockets Libraries

The question that immediately arises when talking about libraries in any capacity is whether there is a need for any at all. This often-overlooked question can easily be the root of bad performance or bad-quality code later on, and it is an important topic to cover early. Even though developers might find themselves automatically tempted to use popular libraries such as Socket.IO, because they claim a clearer syntax and easier feature APIs, we should first ask whether we value this additional functionality more than the trade-offs of using a vendor library. To make the decision at this point, we should do some more research on the historical implications of the libraries in question and the exact feature sets and implementations they offer, and finally compare those with the needs of our application.

Since 2010, even before the WebSockets standard was finally accepted by the IETF, Socket.IO has come a long way. Under the banner “Sockets for the rest of us”, the creators initially intended to provide an easy-to-use WebSockets facade which would simplify the implementation of WebSockets on the server side by abstracting away the code necessary for fall-back solutions such as Flash, forever iframe, XHR long polling, XHR multipart encoding etc. on older browsers that did not support the WebSockets protocol at the time. (Socket.IO, 2012) Today, the fall-back is far less needed, as all major browsers contain full support for WebSockets (Figure 4), so the surrounding community has changed direction and concentrates its efforts on providing both the server side and the client side with a similar fail-proof API layer. It quickly gained popularity on GitHub, with 32,924 stars and 6,291 forks, and became the “fastest and most reliable real-time engine” (Socket.IO, 2017), which is implemented widely in advertisements, document collaboration, messaging apps, push notifications, ad blockers and even news readers, and has implementations in Java, PHP, C++ and Swift.

Figure 4 — Graph depicting the wide browser support in 2017. (Can I use, 2017)

When talking about the front-end, client-side implementation, the difference between Socket.IO and native WebSockets support is not that clear. Socket.IO claims to allow for a clearer syntax, and implements multiple additional technologies that add support for legacy browsers. Since this research project will be treated as a “Chrome Experiment”, meaning no legacy support will be provided, and Socket.IO is 182 kilobytes in size, mostly thanks to legacy support, we can safely ignore this feature set of Socket.IO. Omitting such a large library will definitely bring performance improvements, not only in the page size itself, but also because every emit is no longer parsed through 180 kilobytes of JavaScript. Let us look at some code samples at the most basic level of client-side implementation:

var sockets = new WebSocket('ws://' + window.location.host + '/');

sockets.addEventListener('error', function (message) {
  console.log("error");
});
sockets.addEventListener('open', function (message) {
  console.log("websocket connection open");
});
sockets.addEventListener('message', function (message) {
  console.log(message.data);
});

As we can observe, native WebSockets implementation uses regular “vanilla” JavaScript event listeners and the interface behaves as an instanced object with which we communicate through its methods.

var sockets = io();

sockets.on('connect_error', function (message) {
  console.log("error");
});
sockets.on('connect', function (message) {
  console.log("socket.io connection open");
});
sockets.on('message', function (message) {
  console.log(message);
});

In the Socket.IO example above, we can notice the exact same behaviour pattern, in that it acts as an object with its API in the form of method state controls. The main difference in the code is that the listeners are funnelled through an .on() method abstraction, similar to what jQuery uses. This is done because Socket.IO uses custom listeners which might (or might not) trigger proprietary or legacy code, depending on the client configuration.

Considering that this project does not offer legacy support, we can concentrate our efforts on producing highly readable code while striving for the best performance. As far as code readability and cleanliness are concerned, since the WebSockets implementation uses native listeners and requires no overhead, it is clearly the way to go. If we recreate some of the network tests performed by Rafał Pocztarski as part of his “WebSocket vs Socket.IO” research project, there is indeed a slight difference in the initial handshake size, as the documentation suggested. (Pocztarski, 2016) The WebSockets implementation requires only two connections, with the second one being the 0-byte “HTTP/1.1 101 Switching Protocols” response that happens immediately after load, both totalling 1.8 kilobytes of transferred data during a 14-millisecond load time. (Figure 5) On the other hand, Socket.IO made 6 connections totalling 73 kilobytes with the minified and compressed library, and 181 kilobytes with an uncompressed version. Apart from the sheer size difference, the 6 requests from Socket.IO had a combined load time of 0.27 seconds on the local machine.

Figure 5 — Tests performed in a local environment on macOS 10.12.1, based on Rafał’s Node test. (Pocztarski, 2016)

To break down the requests and gather some insight into why Socket.IO is doing this:

  1. The HTML page itself
  2. Socket.IO’s JavaScript (180 kilobytes)
  3. First long polling AJAX request
  4. Second long polling AJAX request
  5. Third long polling AJAX request
  6. Connection upgrade to WebSockets protocol

As Ivan Vanderbyl, co-founder of Flood IO, puts it, Socket.IO seems to have some straightforward Single Responsibility Principle (SRP) issues; it is simply trying to do too much at once. It has even been split into two libraries because of this problem: Engine.IO, which powers the socket abstractions and connection management, and Socket.IO, which handles reconnection, event emitting and message namespacing. The winner of this comparison is clear: there is no objective reason to use Socket.IO as part of our client-side code.

On the server side, we do not have a native V8 implementation like we do in the browser. Luckily, many Node.js modules are available for this purpose, and almost all perform the same basic WebSockets serving logic well. Some of them, like Socket.IO’s backend module, offer additional functionality which can only be utilised when paired with a specific client-side library, as is the case with Socket.IO’s capability to run Adobe Flash/Java based socket connections. In this sense, Socket.IO shines, as it brings lots of abstracted logic such as room systems, client tracking etc. As was the case with the client side, we will pick one of the most “simple to use, blazing fast, and thoroughly tested” WebSockets server implementations — websockets/ws (Stangvik, 2011), which was suggested by many authors, including Daniel Kleveros in his 2015 article “600k concurrent websocket connections on AWS using Node.js”. (Kleveros, 2015) The module intends to provide the best performance with a very limited feature set that supports only basic operations and APIs, as shown in benchmarks against Chrome and Jetty. (Stangvik, 2016) The module offers both client-side and server-side implementations, but by “clients” in this sense the author means other Node applications, not to be confused with traditional browser-based clients. For browsers, developers are expected to use native WebSockets handling. Let’s look at a very basic example of a message response:

var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({port: 3000});

wss.on('connection', function (ws) {
  console.log('A new Client is connected!');
  ws.send('Hello friend!');

  ws.on('message', function (message) {
    console.log('Received: %s', message);
  });
});

If we examine this simplest possible implementation of message sending, presented on the websockets/ws GitHub page (Stangvik, 2011), the code is clean and simple, with easy-to-understand variable names and method parameters. There is no method chaining, and the callback structure is very simple; to that end, it resembles a functional waterfall coding style, where a script starts with declarations and high-level functions and moves on to lower and lower level operations as it continues. This is important, as it greatly simplifies the boilerplate code that will be used for communication and leaves the business logic of the app under the spotlight.

Opting for the websockets/ws library means that we will have to write some of the functionality that comes natively with other vendor libraries, but if we consider how robust those implementations are, and how little of their feature set we would use, it is safe to say that websockets/ws is the way to go. Essentially, we will have to manually handle client disconnection events and keep-alive communication by creating the “pong” listener and connection terminator, both of which are already well documented in the library’s README.md file. Apart from that, each emit targeted at a specific room (GS in our case) will have to loop manually through all connected clients, like the following:

function emitToGS(author, message) {
  server.clients.forEach(function (client) {
    if (client.gs === author.gs && client.id !== author.id) {
      client.send(message);
    }
  });
}
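The keep-alive handling mentioned above could be sketched roughly as follows. This is a sketch following the ping/pong pattern documented in the websockets/ws README; the 30-second interval and the helper names `heartbeat` and `startKeepAlive` are illustrative assumptions, not part of the Plateaux code:

```javascript
// Mark a client alive whenever its pong frame arrives.
function heartbeat() {
  this.isAlive = true;
}

// Periodically ping every client and terminate those that did not answer
// the previous ping. Returns the timer so it can be cleared on shutdown.
function startKeepAlive(wss, intervalMs) {
  wss.on('connection', function (ws) {
    ws.isAlive = true;
    ws.on('pong', heartbeat);
  });

  return setInterval(function () {
    wss.clients.forEach(function (ws) {
      if (ws.isAlive === false) return ws.terminate();
      ws.isAlive = false;
      ws.ping();
    });
  }, intervalMs);
}
```

A call such as `startKeepAlive(wss, 30000)` after the server is created would be enough; any interval works, as long as it is longer than a realistic round trip.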

Through a process of ground research, benchmark tests and elimination based on functionality, the objective decision is made to use the native WebSockets browser implementation for the client application, and the websockets/ws implementation for the Node.js server application.

Comparative analysis of prior art

When analysing prior art, the most advanced programming project in the musical domain first comes to mind. Google Magenta is a TensorFlow-based neural network which can be trained with audio-based rulesets and datasets in order to create completely new music. (Eck, 2016) While the project is still in its infancy, and is technologically more complex than the scope of this research, its philosophical footprint is more important than its technology or even performance. Magenta is trying to create a real next-generation Kyoko Date or Hatsune Miku, the world-famous Japanese virtual pop singers, and presents us developers with the question of artistic authorship, and the very nature of art itself.

Google’s interest in audio as an underdeveloped medium of communication on the internet does not stop there, as they have repeatedly promoted and implemented upgrades to native audio systems such as the Web Audio API for the Chrome browser, and have shown how these can be used in practice and combined with existing standards. (Bidelman, 2017)

Chrome Music Lab is another set of Google efforts that implement audio into three-dimensional and two-dimensional interactive canvases on the web. (Google Inc, 2017) The Sound Waves and Spectrogram projects offer a visual representation of the audio sample being played, as a mathematically calculated waveform. The Kandinsky project offers a more advanced and interactive canvas which allows for gizmo creation, always followed by an adequate audio sample based on the shape of the drawn gizmo. The idea for different three-dimensional meshes comes from here. Much in the same way, Plateaux meshes will use shapes in order to represent audio samples.

Tools like Annyang use this API to create advanced listening capabilities that can parse voice commands into directions for a web-based program. (annyang!, 2015) The tool is inferior to today’s neural-network-based digital assistants like Siri, but it is entirely created in JavaScript and sends a strong signal that advanced audio manipulation on the web is possible.

The Moon art project, created by Ai Weiwei and Olafur Eliasson in 2013, is a clear parallel to the Plateaux application, with the main difference being that its medium of creation is visual in nature, while Plateaux employs sound. They have created a planet (or a satellite) which contains an ever-developing surface, composed on a micro level of thousands of art pieces created and submitted openly by any visitor. On a macro level, these pieces lose their meaning and become completely different entities, composing and visualising their own stories. The project was created in WebGL using the Three.js library.

Agario has quickly become one of the most popular online games. Based in a two-dimensional vector space, thousands of users are placed in the same GS, in the role of an amoeba-like creature within a virtual petri dish. There is no official information on the game’s engine, but by analysing the huytd/agar.io repository on GitHub, we gain some insight into how the architecture is made. (Tr., 2015) The client-side code does not bear any business logic, as it could easily be manipulated; instead, it only takes care of emitting the current mouse position on every requestAnimationFrame() call. With our technique of not emitting positions when idle or when there is a large delay on the client, we will achieve a large amount of network optimisation, which should make up for the additional weight of rendering in Three.js instead of the plain Canvas that Agario uses. The Agario client-server communication model is so simple that it only has three main communication events: movement, eating food and consuming others. This means that the server bears most of the communication, while client performance is improved.

The portfolio website of web developer Cankat Oguz is an interesting case, as it uses both Three.js and the Web Audio API in order to achieve a movie-like effect from the user’s perspective. By using Stemkoski’s mouse-over algorithm, he achieves an unexpected level of interactivity for a movie. (Stemkoski, 2013) Users are able to click on three-dimensional models and interact with them while using them as menus.

The Inspirit experience from the “Unboring” studio puts the user into the centre of a pre-scripted, three-dimensional low-poly world, where they follow and interact with a limbo-looking character through a puzzle requiring them to rotate the camera and issue commands for the character. Audio is omnipresent in nature, with sudden spurts of activity that accompany player actions. The project was created in Three.js, with advanced renderer modifications in order to achieve the needed grayscale effect. (Unboring, 2015)

A Soft Murmur is a really simple single-user website based on the older Rainymood concept. It offers ten different abstract or background sounds, including white noise, which can be intertwined into a soft murmuring background noise. The technique of multiple overlaid audio samples is similar to the basic idea of this project, but the level of complexity is lower, as we will incorporate two different sound states (active and static) and the sounds themselves will be completely synthesised, without any pretence of recreating nature sounds. In doing this, we will escape the domain of noise generation, enter the domain of musical composition and avoid the sound-overflow issue stated by a now unknown Redditor back in 2015: “Nothing quite so relaxing as sitting in a coffee shop in front of a fireplace with a TV turned to static and a hurricane”. (Anonymous, 2015) Implementing the sounds in a physics-based ruleset of the three-dimensional world will put an additional spin on the process.

The Radiooooo project offers a glimpse into a truly audio-centred project. The whole of the website’s functionality and purpose revolves around the practical task of music discovery. (Troubat, 2013) Users are presented with a world map and a decade slider, which they use to navigate music genres through time. The concept is very simple, and implements a large library of music pieces streamed over a dedicated local server.

Similar projects include OfficeBeatz, which offers a continuous stream of downtempo and atmospheric popular music for discovery on “lazy office afternoons”, and of course its WebGL-based predecessor Radio.Garden.Live, which offers a large array of local radio stations streaming in real time, arranged on a virtual earth-like sphere based on geolocation data sourcing of the stations. (Studio Puckey, 2013) These projects allow for musical discovery, but lack both the creative and social components of the Plateaux project.

The Audio component

One of the most prominent JavaScript audio libraries, Howler.js, allows for the use of audio sprites and HTML5 audio inclusion instead of regular XHR requests, offers a simplified API, and even supports spatial audio when combined with WebGL/Three.js, an environment very similar to what this project is researching. (Simpson, 2017)

There are other popular JavaScript libraries for audio handling in the browser, such as Create.js, Sound.js or Waud.js, but they all fall short in terms of performance and feature set in comparison with Howler.js. There are more advanced libraries like WebAudioBox or rserota/wad, but their capabilities surpass simple audio sprite control and enter the domain of real-time mathematical audio synthesis, which is outside the scope of this paper. (Serota, 2016)

iOS, Windows Phone and some Android phones have very limited HTML5 audio support. They only support playing a single file at a time, and loading new files requires user interaction and incurs a big latency, so the project will not actively support those platforms, although it may work on them. Desktop browsers, on the other hand, support active audio playback of multiple files, including preloading, which may cause render-frame and performance drops considering the amount of overlaid active audio samples the project uses. To overcome this, there is a technique of combining all audio files into a single file and only playing/looping certain parts of that file; zynga/jukebox is an audio framework that uses this technique. (Tiigi, 2015) Unfortunately, the nature of the project prevents us from performing this common sprite optimisation technique directly: our samples must be played independently from each other, yet depend on each other to be preloaded. If audio sprites were out of the question, we might be able to use native JavaScript for audio handling and gain some performance.

Luckily, Howler.js is the only library with built-in support for this particular use case. Howler’s sprite functionality allows for multiple parallel instances of the same audio sprite, as seen on the project’s Sprite example page, where the user is able to play multiple sections of the same audio file in parallel. This powerful feature, combined with advanced preloading techniques, makes a great case for using Howler.js.

In order to use an audio sprite with Howler.js, the file has to be specifically optimised, with dynamic breaks between the sprites that ensure every sprite starts on a full-second mark, as seen in Figure 6. On the library’s documentation page, the authors suggest using a sibling library for generating audio sprites. (Tiigi, 2015) After installing the plugin as a global npm module, issuing the following bash command will parse all .m4a files in the local folder and output a range of audio formats, along with a specially formatted JSON file matching the Howler.js sprite specification.

audiosprite --format="howler" --output="output" *.m4a

Figure 6 — Audio sprite waveform generated via Transloadit. (Transloadit, 2017)
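The generated JSON maps each sprite name to a start offset and duration which Howler consumes directly. The following configuration sketch assumes file names matching the --output value above; the sprite offsets shown are illustrative placeholders, since the real ones come from the generated output.json:

```javascript
// Howler loads the first src format the browser supports; the sprite map
// below would normally be copied verbatim from the generated output.json.
var sound = new Howl({
  src: ['output.webm', 'output.m4a', 'output.mp3'],
  sprite: {
    gizmo_1: [0, 4000],    // starts at 0 ms, lasts 4000 ms
    gizmo_2: [5000, 4000]  // full-second start marks, as in Figure 6
  }
});

// play() returns an id, which later fade/stop calls refer to.
var id = sound.play('gizmo_1');
```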

Implementing the Howler audio control was probably the most straightforward process in the building of this application. We create two native event listeners, connect them to the events that Gizmos emit when they enter or leave the sleep state, and activate a specific sprite with each event.

document.addEventListener('gizmoSleep', function (e) {
  sound.fade(1, 0, 3000, soundIdsCache[e.detail.name]);
});
document.addEventListener('gizmoWake', function (e) {
  soundIdsCache[e.detail.name] = sound.play(e.detail.name);
});

Aesthetics of interaction and Mesh Creation

Interacting with three-dimensional objects via a two-dimensional medium such as the screen can be confusing for the participant. By forcing Gizmos to use gravity-like lerping animation and movement physics familiar to humans, the learning curve and interaction gap are reduced.

Application Design

The visual application design is directly based on an example prototype environment created by the Whitestorm.js community, named simply “design/saturn”. (Whitestorm.js, 2017) As we can see in the example, they have created a central planet object and a plethora of randomly generated asteroids forming a belt in orbit. Except for the built-in Three.js support for OrbitControls, the demo is not interactive in any way, and it will require some major code restructuring to create the Gizmo and Event systems, materials control and server synchronisation between clients.

Since the design employs a low-poly approach, the application will be easier for client computers to render in real time. Spheres are especially problematic, since they normally require a large polygon and face count, but in this low-poly world we can simply use the Whitestorm.js default values of 8 width segments and 6 height segments. Visual representations of the gizmo types are available in wireframe in Figure 7.
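To put the low-poly choice in numbers, here is a back-of-the-envelope count of the triangles a segmented sphere produces. This is a sketch under the assumption of the standard UV-sphere construction, where each pole row yields one triangle per width segment and every other row yields a quad:

```javascript
// The 2 pole rows contribute widthSegments triangles each; the remaining
// (heightSegments - 2) rows contribute widthSegments quads (2 triangles each).
function sphereTriangles(widthSegments, heightSegments) {
  return 2 * widthSegments + 2 * widthSegments * (heightSegments - 2);
}

console.log(sphereTriangles(8, 6));   // the low-poly Plateaux sphere: 80
console.log(sphereTriangles(32, 16)); // a typical high-detail sphere: 960
```

An order-of-magnitude difference per sphere, multiplied across every Gizmo in the scene, is what keeps the frame rate comfortable on client machines.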

Figure 7 — Proposals for Gizmo shapes rendered in real time in the browser.

The visual identity of the application will be based around this premise, by implementing #FF4E00 (the Planet’s orange surface colour) as the main colour of annotation. Fortunately, the LogoDust design team offered their collection of rejected logotypes for developers to use and modify for projects just like this one, and logo #32 closely resembles the “Plateaux” planet. (Fairpixels, 2016) After some design changes, colouring into shades of #FF4E00 and adding a textual representation of the application name in the Open Sans font, the application logo was ready.

Action orbit

The action orbit will be a ghost orbit that coincides with the asteroid belt orbit on the y:0 coordinate, and will contain an array of Gizmos. It is slightly tilted towards the camera positioned at (0, 100, 400) in global coordinates, similar to the one used in the Three.js editor, as seen in Figure 8.

Figure 8 — Planet orbit plane as seen from the angle of camera at position (0,100,400)

Mesh and Audio correlation

Every Gizmo will be represented by a texture which serves an iconographic function, so that users can differentiate Gizmos, and thus the audio segments, more easily. Since we are using simple base shapes for the Gizmo mesh, textures can be applied at the time of Gizmo instantiation:

particle.material = getMaterial(material).clone();
particle.material.map = WHS.TextureModule.load('assets/spider.png');

Game Physics

Earlier prototypes of the application included two “action plates” with realistic physics, as seen in Figure 9, and even a two-dimensional turntable based on pure CSS. But in trying to recreate realistic physics and collider events, we discovered that writing them manually, or using vendor libraries like Oimo.js (a full JavaScript conversion of the OimoPhysics engine originally written in ActionScript 3.0) or Cannon.js, only made things worse and less familiar. While discussing why this poor experience was the case, SAE Institute Belgrade lecturer Marko Krmotić said: “Gaming companies put tens of millions of dollars a year into creating a more lifelike physics experience. Though some of the libraries we have seen here do a really good job, we are simply not yet at a point where the experience is lifelike, and thus, it is counterintuitive.”

Figure 9 — First working prototype of Plateaux application, with realistic Physics based on Oimo.js and available at the Git commit: [871a3f64108a6a840abcfaa73d8c644cd6580b98]

Because of this, we will limit the physics to only what is necessary. In essence, by learning from CSS-based UI development, we will mimic what the user expects a realistic physics engine to do, with simple animations. This way, we can define a few predefined Gizmo states early on and base our application around the state the element is in, rather than the user input, as we had tried in earlier commits. For example, instead of recreating the physics of an area-of-effect (AOE) gravity belt, which is painstakingly hard to do even in dedicated gaming engines like Unity, we can calculate the expected position and create a simple animated transition by lerping the Gizmo between two positions over a predetermined amount of time.

// Clone the start position so the interpolation source is not mutated
particle.data.lerpFrom = particle.position.clone();
particle.data.lerpTo = new THREE.Vector3(
  Math.cos(particle.data.angle) * particle.data.distance,
  0,
  Math.sin(particle.data.angle) * particle.data.distance
);

const animationLoop = new WHS.Loop((clock) => {
  // Interpolation factor grows from 0 to 1 over five seconds
  const i = Math.min(clock.getElapsedTime() / 5, 1);
  // lerpVectors writes the interpolated point into a fresh vector, so the
  // animation does not compound on its own previous frames
  particle.position = new THREE.Vector3().lerpVectors(
    particle.data.lerpFrom, particle.data.lerpTo, i
  );
  if (i >= 1) {
    animationLoop.stop(world);
    particle.data.status = '';
  }
});
animationLoop.start(world);

Model, View, Controller

Server setup

Node applications can run on most available servers with little additional setup, unlike most AMP or EMP stacks. For this particular deployment we will be using a DigitalOcean virtual private server (VPS) running Ubuntu 16.04. Since the server in question is a shared server, using Nginx for serving other domains, the Node application will not be able to listen on port 80 and serve via Express, as that port is already reserved by the nginx process which serves the client-side code. There are a few ways to mitigate this issue, but the simplest one is to use nginx’s proxy capabilities. Node will listen and broadcast over port 3000, and nginx will route all traffic through port 80 and a URI subfolder.

location /server {
    proxy_pass http://127.0.0.1:3000/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}

It is possible to start the Node script directly by issuing node index.js over SSH, but once we exit the terminal session, execution will stop. We can mitigate this by prefixing the command with nohup, which lets the program keep running detached from the terminal while logging its shell output into a file. Alternatively, we can simply use the pm2 process manager, which will ensure uptime even across server restarts.

Event-Based Architecture

Since the application is split into several different scripts (audio, WebSockets, server, world, Gizmos…), the easiest way to communicate between them was to use the native JavaScript CustomEvent functionality. A client’s actions emit app-wide events, which act as hooks for different parts of the ecosystem. By writing event handlers on both sides and propagating some of the events through the server, we have created a shared event network between all clients. A swarm of network events happens mostly at two key moments during application execution: when any participant moves a Gizmo and when a new participant joins the room.

The Gizmo Move Event

In essence, the effective functionality of the system can be described as duplicating the world state on multiple clients, which in turn plays the same melody to all of them. The key part of this process is the ability to simultaneously track a moving Gizmo across all clients. During the research multiple implementations surfaced, but Salvador Dali’s StackOverflow answer about mouse movement on a 2D plane sums the message format up: on each mouse move, we simply emit an event status, plus internal Gizmo data and position, to the server, which finds all GS clients and broadcasts the event to everyone. (Dali, 2014) Figure 10 visualises some key system events that execute when a player clicks and drags an active Gizmo.

Figure 10 — Plateaux communications model shown in a Huy Tr. style diagram.
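The move-event message flow can be sketched as follows. The status, name and position fields follow the description in the text, but the helper functions and exact object shapes are hypothetical:

```javascript
// Client side: serialize the drag state on each mouse move
function buildMoveMessage(gizmo) {
  return JSON.stringify({
    status: 'gizmoHold',      // routes the message on the receiving side
    name: gizmo.name,         // which Gizmo is being dragged
    position: gizmo.position, // its current world position
  });
}

// Server side: parse the message and re-broadcast it to every
// client attached to the same GS
function broadcast(rawMessage, clients) {
  const data = JSON.parse(rawMessage);
  clients.forEach((client) => client.send(JSON.stringify(data)));
}

// Tiny usage example with a stub client standing in for a WebSocket
const sent = [];
broadcast(
  buildMoveMessage({ name: 'spider', position: { x: 1, y: 0, z: 2 } }),
  [{ send: (msg) => sent.push(msg) }]
);
console.log(JSON.parse(sent[0]).status); // 'gizmoHold'
```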

Client-to-Client World Population

When a client first opens the WebSockets connection, they should not be allowed to start using the application until the server attaches them to a GS. At this point, the server counts all active users and their GS, and checks whether any GS has empty slots left. If no GS has empty slots, or this is the only client, a new GS is opened, and a GizmoArray object full of predefined world data is sent to the player. On the other hand, if the client is joining an active room, they should start the world with the exact state that the other clients have. This is why we implement a client-to-client “asking” mechanism: the server informs the new client that they should wait for the worldstate via a simulated promise, and asks the first user from the same GS to export their current worldstate into a GizmoArray of the same format used for world population. This data is routed through the server to the pending client, who is then allowed to join the room and listen to events. Figure 11 visualises the communication between clients during world population.

Figure 11 — Plateaux World population events diagram with accent on client-to-client communication events, shown in a Huy Tr. style diagram.
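The “simulated promise” wait can be sketched like this; all names are illustrative, and the server is reduced to a single stored callback:

```javascript
// The joining client is handed a Promise that resolves only once an
// existing client's exported GizmoArray arrives via the server.
function waitForWorldState(registerResolver) {
  return new Promise((resolve) => registerResolver(resolve));
}

// Stand-in for the server holding the pending client's resolver
let pendingResolve = null;
const joiningClient = waitForWorldState((resolve) => {
  pendingResolve = resolve;
});

// An existing client in the same GS exports its current worldstate…
const exportedGizmoArray = [{ name: 'spider', angle: 0.5, distance: 120 }];

// …and the server forwards it, releasing the waiting client
pendingResolve(exportedGizmoArray);

joiningClient.then((gizmoArray) => {
  console.log(gizmoArray.length); // 1
});
```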

Problem solving

Multi-conditional Event triggers

Since the whole of inter-client communication is event based, each event must be of a certain type so that clients and the server can route the information to the correct function. Since we are using native WebSockets, we do not have built-in message-type support. In Java and C# programming there is a concept known as “Method Overloading”, which allows developers to define multiple methods with the same name; the correct one is chosen based on the type of input data. (JavatPoint, 2013) (Microsoft Inc, 2010) Similar to this concept, we attach a status property to each message object. All clients and the server have an event handler which switches on the message status until it finds the correct route for the message. In practice, this means that with a few simple switches we have implemented custom WebSockets message types without using any libraries.

function eventHandler (data) {
  let gizmo = gizmos.children[findGizmoKey(data.name)];

  switch (data.status) {
    case "gizmoHold":
      gizmo.data.remotePickup(data);
      break;
    case "gizmoLerp":
      gizmo.data.remoteLerpToOrbit(data);
      break;
    case "gizmoSleep":
      gizmo.data.remotePutToSleep(data);
      break;
    case "gizmoWake":
      gizmo.data.remoteWakeUp(data);
      break;
    case "populateWorld":
      addToClientsInGs(1);
      serverPopulate(data.gizmoArray);
      break;
    case "askForWorldState":
      sendWorldState(data);
      break;
    case "waitForWorldState":
      addToClientsInGs(data.clientsInGs - 1);
      break;
    case "clientJoined":
      addToClientsInGs(1);
      flashMessage('default', "A new performer has joined the room!");
      break;
    case "clientLeft":
      addToClientsInGs(-1);
      flashMessage('error', "A performer has left the room!");
      break;
  }
}

Group Separator Logic

So far, we have seen that the server assigns clients to different rooms so that communication is distributed evenly. Since the application is not expected to have more than 10 Group Separators open at any time, we want to pack as many clients as possible together instead of doing load balancing.

At this point, we should understand that these virtual rooms do not really exist, as the only thing that defines them is an integer property attached to each client. Looked at in this light, it is easy and quite cheap to loop through all clients, build up an array that represents GS status, and compare the lengths of its members. In a few short steps, we are able to gather information on the status of all GS without doing any actual real-time tracking of them.
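A minimal sketch of this packing logic, assuming a hypothetical cap of four clients per GS (the real slot limit is not specified here):

```javascript
// Assumed per-GS capacity, for illustration only
const MAX_PER_GS = 4;

// Loop over all connected clients, tally occupancy per GS from the
// integer `gs` property attached to each client (the rooms have no
// other existence), and pack the new client into the fullest GS that
// still has a free slot — packing, not balancing.
function assignGs(clients) {
  const occupancy = {};
  clients.forEach((client) => {
    occupancy[client.gs] = (occupancy[client.gs] || 0) + 1;
  });

  let best = null;
  Object.keys(occupancy).forEach((gs) => {
    if (occupancy[gs] < MAX_PER_GS &&
        (best === null || occupancy[gs] > occupancy[best])) {
      best = gs;
    }
  });

  // No open slot anywhere (or no clients at all): open a new GS
  if (best === null) {
    return Math.max(0, ...Object.keys(occupancy).map(Number).map((n) => n + 1));
  }
  return Number(best);
}

console.log(assignGs([{ gs: 0 }, { gs: 0 }, { gs: 1 }])); // 0
```

With two clients in GS 0 and one in GS 1, the newcomer is packed into GS 0, the fullest room with space left.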

This principle is based on a “Chained Failover” type of load balancing, as defined by Kemp Technologies, and it creates a single point of decision for load balancing, which is a very good option for later modifications: if any scaling needs arise in the future, we can simply change the GS assignment process at this point, without it affecting any other part of the application. (Kemp Technologies, 2017)

Mouse plane projection

Using a mouse to manipulate objects in three-dimensional space is not a built-in feature of Three.js, simply because there are many ways of doing it, for different purposes. Whitestorm.js, on the other hand, provides a few encapsulated tools that make the job easier to comprehend.

A common practice when using a mouse is to project a plane at a fixed distance from the camera, which can be thought of as a screen plane, and then create a Raycast line that starts from the camera point and passes through the mouse point projected on that virtual screen plane. This principle is demonstrated in Figure 12, created in November 2011 by Marco Scabia of Adobe Inc.

Figure 12 — The perspective projection of a 3D object. (Scabia, 2011)

In our actual example, we are using Whitestorm’s .project() method, in combination with the Three.js base .copy() method, for converting a Vector3 into a valid position. One discrepancy from the standard model is that the .project() method expects a Plane object as an argument. Instead of performing the very expensive operation of projecting a plane onto a moving object within a globally rotating object group, we force the application to always create a new plane based on the zero-point vector (Vector3(0, 0, 0)) and perpendicular to the camera ray. As a side effect, this allows the Gizmos to enter the Planet mesh when moved across it, and disappear from sight.

case "isHold":
  particle.position.copy(mouse.project());
  particle.data.position = particle.position;
  socketEmit('gizmoHold', particle.data);
  break;
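The ray-plane intersection that .project() encapsulates can be sketched as pure vector math; this mirrors the principle described above (a plane through the world origin, perpendicular to the camera ray), not Whitestorm’s actual implementation, and uses plain {x, y, z} objects instead of THREE.Vector3:

```javascript
// Dot product of two plain vectors
const dot = (a, b) => a.x * b.x + a.y * b.y + a.z * b.z;

// Intersect the ray p = o + t·d with the plane through (0,0,0)
// defined by n · p = 0, giving t = -(n·o)/(n·d)
function intersectOriginPlane(rayOrigin, rayDir, planeNormal) {
  const t = -dot(planeNormal, rayOrigin) / dot(planeNormal, rayDir);
  return {
    x: rayOrigin.x + t * rayDir.x,
    y: rayOrigin.y + t * rayDir.y,
    z: rayOrigin.z + t * rayDir.z,
  };
}

// A camera at z = 400 looking straight down -z: a ray through the
// centre of the screen should land exactly on the world origin
const hit = intersectOriginPlane(
  { x: 0, y: 0, z: 400 },  // camera position
  { x: 0, y: 0, z: -1 },   // viewing direction
  { x: 0, y: 0, z: 1 }     // plane normal facing the camera
);
console.log(hit.x, hit.y, hit.z); // 0 0 0
```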

GUID Generation

Universally Unique Identifiers (UUID or GUID) are a hot topic in JavaScript development. The RFC4122 UUID standard requires the use of the underlying Operating System RNG, which cannot be accessed from the browser. There are lots of libraries that try to solve this issue by implementing various ActiveX controllers or using Math.random(), and the best of them use the newly available RNG APIs, which, judging by Robert Kieffer’s calculations, allow for 3.26×10¹⁵ version 4 RFC4122-compatible UUIDs to be generated before reaching approximately a one-in-a-million chance of a collision. (Kieffer, 2015) For Plateaux, using an algorithm like this would put a strain on the already busy server, with practically no benefit.

Instead, we can use a far simpler function written by Jon Surrell, which concatenates eight simple Math.random() transformations of a five-digit hexadecimal number, floored and trimmed in the process. (Surrell, 2016)

function guid() {
function s4() {
return Math.floor((1 + Math.random()) * 0x10000).toString(16).substring(1);
}
return s4() + s4() + '-' + s4() + '-' + s4() + '-' + s4() + '-' + s4() + s4() + s4();
}

Discussions and Results

Relational Cybernetic Art

Contemporary art is redefining the question of what it means to live in the state (condition) of the contemporary world. Nicolas Bourriaud’s theory of relational aesthetics is acknowledged as one of the first steps in defining the framework and theory of contemporary art. Bourriaud recognises the problem of art criticism still relying on art theories from the 60s and 70s, while contemporary art practices are often misunderstood due to the lack of a contemporary framework. That is why he proposes a theoretical paradigm that takes up the legacy of conceptual art but does not propose a death of any kind, nor a utopian project such as modernism. Instead, the theory of relational art accepts the world as it is and embraces it, trying to participate in it in any way possible by altering small particles of time and space. This theoretical concept is based on the Marxist concept of migration from the production of goods to the production of services; in Bourriaud’s own words, “art is a state of encounter” and the exhibition is an “arena of exchange”, thus the worth of the artwork lies in the relations and connections it creates, not in its monetary value. (Bourriaud, 1998, pp. 17–18) This redintegration of a micro-utopian environment and alternative ways of living is the basis for this theoretical paradigm of exchange within a community. At the core of this aesthetic concept is a dialogue between an author who, in the words of Roland Gérard Barthes, “renounced their own death”, and the viewer as an additional co-author of the work of art. (Barthes, 1984)

Artistic projects in relational aesthetics are open and allow for the opportunity of being modified and completed by the viewer. According to Bourriaud, relational aesthetic is defined as a theoretical framework for legitimization of non-material and completely materialistic art. (Denegri, 2006, p. 400)

Relying on Nicolas Bourriaud’s theory of relational aesthetics in this context, the Plateaux project is a project of relational art, as it requires a participatory practice in which the visitor rounds off and completes the work and, in doing so, becomes a participant. By its nature, Plateaux as a project cannot exist without the visitors who use it. Additionally, the project literally presents a platform which legitimises and legalises non-material and completely materialistic art that is constantly being defined through a dialogue between the work of art and its audience, between audience and audience, and so on. Plateaux does not present a utopian project which aims to change the world, but a niche in which visitors will be able to create bonds based on music: creating something out of nothing, and leaving temporary traces in temporary spaces. Similarly to the example of the Moon project by Olafur Eliasson and Ai Weiwei, we have seen in the research that Plateaux is part of a larger community that believes in creative collaboration groupware, and that it invites an open expression which transcends geopolitical boundaries by insisting on creative dialogue between the participants and the project; this is exactly what makes it relational in nature.

The artistic nature of this project spans industries and touches on the topic of creative ownership. Since website users will be put into the position of co-artists without realizing it, the result of this collaboration arguably falls under “cybernetic art” or “computer music” as defined by Miško Šuvaković in 1999, both of which stem from connecting and integrating humans and computers in an artistic workflow. (Šuvaković, 1999)

In a wider sense, “cybernetic music” is all music in which a special program generates sound matrices in the form of partitur or visual storage, which are then interpreted by performers, or even compositions that are created and performed by computers in their entirety. The specific case of the Plateaux application might be viewed in this light as a work of cybernetic art where individual sound bites are created and predetermined by a human musician, but are composed into matrices by multiple co-authors and performed by a computer program, similar to the art of the Japanese “idoru” Kyoko Date, created by HoriPro Inc in 1996. (Nenić, 2006) In a manner more directly related to Plateaux, Šuvaković defines another nature of computer music, covering contemporary and experimental genres of classical and popular music in which the computer, along with the composer and performers, participates in the process of composing and performing the music. Multiple experimental forms where the computer acts as an “author” or “realisator” of the musical work hint at a new field of cybernetic art, understood as an area of activity conceived by the “relation and integration of man and machine in the creation process”.

Death of the Author & Artistic Ownership

Similarities with chapter 8 of Eco’s The Open Work go further than the name “Death of the Author”. Eco describes the idea of an “open work” as open in terms of the interpretations that different viewers, from different cultural contexts and with different experiences, attach to the work. Following this train of thought leads us to the conclusion that an open work amounts to an act of improvised creation, because the participant or performer is not entirely free to interpret the composer’s instructions, but “must impose their judgment on the form of the piece”. (Eco, 1989, p. 1)

In the case of Klavierstück XI by Karlheinz Stockhausen, the performer evaluates a large set of partitur and note groupings on a single sheet, which act similarly to the way Plateaux employs sound segments: the performer chooses among them, combines them and creates a new narrative every time they perform. To put things further into perspective, Eco mentions similar works like Pierre Boulez’s Third Sonata for Piano, where the performer is presented with a very similar system of note groupings, but with some rules about order and non-permissible permutations predefined by the author. In the chapter Poetics of the Open Work, Eco also sees this as an open work, as it does not offer itself as finite, but is brought to a conclusion at the time of the performance, by the performers themselves. In Plateaux there are additional rules, but those rules are imposed on the work by the multitude of players in the GS, instead of being pre-defined by the author of the work. Performers seem to be free to mix and loop audio samples, but since they are forced to collaborate with others who will also make real-time decisions and influence the output, each instance of the performance will define slightly different rules, influenced by the mood, music taste or aesthetic differences between the performers.

Eco highlights that an open work becomes a playground for research and verification. “Every performance offers us a complete and satisfying version of the work, but at the same time makes it incomplete for us, because it cannot simultaneously give all the other artistic solutions which work may admit.” (Eco, 1989, p. 33)

Eco believes that although the participation of the performer changes the work itself, it is still recreated within the same field of relations set by the initial author; thus authorship over the work is not lost. The author has offered the visitor a series of rationally organised possibilities, as a work that ought to be finished. (Eco, 1989, p. 36) Through this prism, Eco sees the open work as a system in which the author, visitor, performer and the medium itself are intertwined segments of the whole. On the other hand, we have seen that in the exact same system Eco describes as the open work, Šuvaković interprets both the computer and the performers as becoming authors in equal parts, as they are required to impose a manifestation of their nature onto the work. This vision is more in agreement with Nicolas Bourriaud’s theory of relational aesthetics, which holds that since the work is unfinished, it requires a participatory practice through which the performer or visitor becomes a participant in the creation process, revealing another non-persistent nature of the open work.

In the end, speaking in technical terms, if the whole project is released as open source under the MIT licence, which allows reuse with modification, where anyone may join the development discussion, add or remove content and even change the logic of the application, the lines of authorship become even more blurred than Eco, Šuvaković and Bourriaud could have imagined.

Future Prospects

The Plateaux project is released publicly on GitHub, with the goal of stirring conversation, creating learning opportunities and starting an open discussion between developers and artists. Undoubtedly there are questions left unanswered and initiatives left unfinished. As the readme.md file states, a mobile version built as a Cordova application on the same frameworks could be the next logical and truly innovative evolutionary step for this application, bringing the project to a wider audience. In the engineering sense, there is always room for performance improvements and micro-optimizations. As for the design of the application itself, the domain language created during development is well divided into modules, and allows for removing or changing individual objects without affecting the logic of the application. This means that, with a little effort, a developer who specialises in three-dimensional visualizations in the browser environment could base further research or project forks on this project.

In her research paper “Measuring music-induced emotion: A comparison of emotion models, personality biases, and intensity of experiences”, Jonna K. Vuoskoski solidifies the idea that music correlates with the listener’s and artist’s mood. Music can influence the mood, but the influence can also run in the opposite direction, where mood influences the choice of music; in the case of a performer or author, it influences the creative output itself. Plateaux seems like a good starting point for further research on the psychological aspects of sharing authorship over a musical art form in this sense. Usability testing could be done to create profiles and research around making audio and three-dimensional interaction easier and more rewarding for the end-user, which would in turn help popularise the use of the Web Audio API and WebGL technologies.

Conclusions

David Bell defines Cyberspace as an imaginary space that exists between computational devices, digital services, new media technologies and simulations/animations of all kinds. (Bell, 2007, pp. 2–14) On the other hand, all of these can be seen as segments, different faces of the same thing: they are what Cyberspace consists of, but they are not what Cyberspace is defined by. Bell sources this concept from William Gibson’s science-fiction classic “Neuromancer”, which coined many key terms in modern computing and popular culture.

Speaking in terms of a more modern adaptation, the definition coined by Bell and Gibson could be updated with Bourriaud’s concept of a non-persistent open work of art. We have shown that imagining an innovative three-dimensional experience on the web means moulding the medium into a space of meeting, a plateau. It can be concluded that Cyberspace is in fact an imaginary space that exists between participants in a virtual environment.

This paper has shown that the process of creating an innovative experience in today’s web development environment greatly depends on being able to differentiate and compare vendor solutions and programming concepts across multiple spheres. We have seen that including these abstracted, open-sourced vendor solutions can offer complexity and performance benefits far beyond the reach of a single developer, but have also observed that developers should carefully and objectively balance technologies, and be wary of overuse or vendor lock-in when they find there are few to no benefits.

The research performed through the different phases of this paper touches on a multitude of topics, provides insight into common technical problems, offers solutions and possible best practices for some of them, and in doing so serves as a starting point for future research into multi-user interaction in three-dimensional manifestations of cyberspace on the web. By promoting some technologies and demoting others, it gives an informed and objective action plan, opens the discussion for other developers and researchers, and makes future development of similar projects more accessible to other developers.

Through the process of performing the necessary comparisons and building the application, the paper has shown that technologies like WebSockets and WebGL, while not yet mature in terms of their respective ecosystems of vendor abstractions for common operations, do allow for the creation of innovative Cyberspaces on the web, and are in fact what the future of web development and interaction between users will look like and build upon.

Bibliography

  1. annyang! (2015) SpeechRecognition that just works. Available at: https://www.talater.com/annyang/ (Accessed: 30 June 2017).
  2. Anonymous (2015) Create a custom ambient sound mix. Available at: https://www.reddit.com/r/InternetIsBeautiful/comments/2j4rom/create_a_custom_ambient_sound_mix_rain_thunder/cl8got6/ (Accessed: 30 June 2017).
  3. Šuvaković, M. (1999) Pojmovnik moderne i postmoderne likovne umetnosti i teorije posle 1950 godine. 1st edn. Novi Sad: SANU i Prometej.
  4. Bardini, T. (2002) Review: Bootstrapping: Douglas Engelbart, Coevolution and the Origins of Personal Computing. Leonardo.
  5. Barthes, R. G. (1984) The Death of the Author. s.l.:s.n.
  6. Bartle, R. (2003) Designing Virtual Worlds. 1st edn. San Francisco: Peachpit.
  7. Bell, D. (2007) Cyberculture Theorists: Manuel Castells and Donna Haraway. 1st edn. Abingdon (Oxon): Routledge.
  8. Bidelman, E. (2017) HTML5 audio and the Web Audio API are BFFs!. Available at: https://developers.google.com/web/updates/2012/02/HTML5-audio-and-the-Web-Audio-API-are-BFFs (Accessed: 30 June 2017).
  9. Bourriaud, N. (1998) Relational Aesthetics. Dijon: Presses du Réel.
  10. Burge, S. (2015) Love it or Hate it, Bootstrap is Winning the Web. Available at: https://www.ostraining.com/blog/coding/bootstrap-winning/ (Accessed: 01 May 2017).
  11. Cabello, R. (2013) mrdoob/three.js. Available at: https://github.com/mrdoob/three.js/ (Accessed: 30 July 2017).
  12. Can I use (2017) Support tables for Web Audio API. Available at: http://caniuse.com/#feat=audio-api (Accessed: 30 June 2017).
  13. Can I use (2017) Web Sockets. Available at: http://caniuse.com/#feat=websockets (Accessed: 30 June 2017).
  14. Capan, T. (2013) Why The Hell Would I Use Node.js? A Case-by-Case Tutorial. Available at: https://www.toptal.com/nodejs/why-the-hell-would-i-use-node-js (Accessed: 27 June 2017).
  15. Dali, S. (2014) node.js — display mouse pointer movement in other client computers using socket.io. Available at: https://stackoverflow.com/a/24600291/2387266 (Accessed: 30 June 2017).
  16. DeBill, E. (2017) Modulecounts. Available at: http://www.modulecounts.com/ (Accessed: 30 June 2017).
  17. Denegri, J. (2006) Umetnička kritika u drugoj polovini XX veka. Novi Sad: Svetovi.
  18. Doug Engelbart Institute (2017) Doug’s 1968 Demo. Available at: http://www.dougengelbart.org/firsts/dougs-1968-demo.html (Accessed: 11 June 2017).
  19. Eck, D. (2016) Welcome to Magenta!. Available at: https://magenta.tensorflow.org/welcome-to-magenta (Accessed: 01 June 2017).
  20. Eco, U. (1989) The Open Work. Cambridge (Massachusetts): Harvard University Press.
  21. Ellis, D. (2015). All websites look the same. Available at: http://www.novolume.co.uk/blog/all-websites-look-the-same/ (Accessed: 01 June 2017).
  22. EmacsWiki (2014) CollaborativeEditing. Available at: https://www.emacswiki.org/emacs/CollaborativeEditing (Accessed: 11 June 2017).
  23. Evans, D. (2015) Flash Will Soon Be Obsolete: It’s Time for Agencies to Adapt. Available at: http://adage.com/article/digitalnext/flash-obsolete-time-agencies-adapt/298946/ (Accessed: 30 June 2017).
  24. Facebook Inc (2017) Videos. Available at: https://developers.facebook.com/videos/?category=f8_2017 (Accessed: 30 June 2017).
  25. Fairpixels (2016) Free Logo Designs For Your Startup. Available at: http://www.logodust.com/ (Accessed: 30 June 2017).
  26. Google Inc (2012) SPDY: An experimental protocol for a faster web. Available at: http://dev.chromium.org/spdy/spdy-whitepaper (Accessed: 28 June 2017).
  27. Google Inc (2017) Chrome Music Lab. Available at: https://musiclab.chromeexperiments.com/ (Accessed: 30 June 2017).
  28. Internet Engineering Task Force (2011) The WebSocket Protocol. Available at: https://tools.ietf.org/html/rfc6455 (Accessed: 30 June 2017).
  29. JavatPoint (2013) Difference between method overloading and method overriding in java. Available at: https://www.javatpoint.com/method-overloading-vs-method-overriding-in-java (Accessed: 30 June 2017).
  30. JSON Organization (1999) JSON. Available at: http://www.json.org/ (Accessed: 29 June 2017).
  31. Kamp, D. (2012) Sound Creatures X. Available at: http://pictoplasma.sound-creatures.com/ (Accessed: 10 June 2017).
  32. Kang, J. (2011) What is an API? Available at: https://www.quora.com/What-is-an-API-4 (Accessed: 30 June 2017).
  33. Kemp Technologies (2017) Load Balancing Techniques. Available at: https://kemptechnologies.com/load-balancer/load-balancing-algorithms-techniques/ (Accessed: 30 June 2017).
  34. Kieffer, R. (2015) Create GUID / UUID in JavaScript?. Available at: https://stackoverflow.com/a/2117523/2387266 (Accessed: 22 June 2017).
  35. Kleveros, D. (2015) 600k concurrent websocket connections on AWS using Node.js. Available at: https://blog.jayway.com/2015/04/13/600k-concurrent-websocket-connections-on-aws-using-node-js/ (Accessed: 13 June 2017).
  36. Lotus Development Corporation (1995) Groupware: Communication, Collaboration and Coordination. Available at: http://www.intranetjournal.com/faq/lotusbible.html (Accessed: 30 June 2017).
  37. Lung, C. (2008) Tutorial: Building a Flash socket server with Java in five minutes. Available at: http://www.giantflyingsaucer.com/blog/?p=205 (Accessed: 29 June 2017).
  38. Martin, R. C. (2008) Clean Code: A Handbook of Agile Software Craftsmanship. 1st edn. Upper Saddle River (New Jersey): Prentice Hall.
  39. Mashable Inc. (2015) Web design is dead. Available at: http://mashable.com/2015/07/06/why-web-design-dead/#lcmCtMrrbgqo (Accessed: 22 May 2017).
  40. McFarlin, T. (2015) What Is the Vendor Directory?. Available at: https://tommcfarlin.com/the-vendor-directory/ (Accessed: 26 June 2017).
  41. Microsoft Inc (2010) Member Overloading in C#. Available at: https://msdn.microsoft.com/library/ms229029(v=vs.100).aspx (Accessed: 30 June 2017).
  42. Microsoft Inc (2017) SocketAsyncEventArgs Class. Available at: https://msdn.microsoft.com/en-us/library/system.net.sockets.socketasynceventargs.aspx (Accessed: 30 June 2017).
  43. Mozilla Developer Network (2016) Building up a basic demo with Whitestorm.js. Available at: https://developer.mozilla.org/en-US/docs/Games/Techniques/3D_on_the_web/Building_up_a_basic_demo_with_Whitestorm.js (Accessed: 30 June 2017).
  44. Mozilla Developer Network (2017) WebGL — Web APIs. Available at: https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API (Accessed: 20 June 2017).
  45. Mozilla Developer Network (2017) WebSockets. Available at: https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API (Accessed: 29 May 2017).
  46. Nenić, I. (2006) Tehnologija i zvuk: Od digitalizacije muzike ka muzikalnosti digitalnog. E-volucija, Issue 12.
  47. NPM (2017) What is NPM?. Available at: https://docs.npmjs.com/getting-started/what-is-npm (Accessed: 29 June 2017).
  48. O’Connor, S. (2015) ‘Is web design a dying profession?’ — 1 Year On. Available at: https://teamtreehouse.com/community/is-web-design-a-dying-profession-1-year-on (Accessed: 10 June 2017).
  49. Palacin, M. et al. (2013) The Impact of Content Delivery Networks on the Internet Ecosystem. Journal of Information Policy, Volume 3, pp. 304–330.
  50. Pocztarski, R. (2016) rsp/node-websocket-vs-socket.io. Available at: https://github.com/rsp/node-websocket-vs-socket.io (Accessed: 20 June 2017).
  51. Pratas, A. (2015) Every Website Looks the Same, and That’s Ok. Available at: https://www.webdesignerdepot.com/2015/10/every-website-looks-the-same-and-thats-ok/ (Accessed: 24 May 2017).
  52. Purdue University (2011) General Introduction to the Postmodern. Available at: https://www.cla.purdue.edu/english/theory/postmodernism/modules/introduction.html (Accessed: 20 May 2017).
  53. Rang, A. (1990) TinyTalk 1.0 is now available for anonymous FTP. Available at: https://groups.google.com/forum/#!msg/alt.mud/4ChcSb_Ri2g/svu-P6s5fM0J (Accessed: 30 June 2017).
  54. Ratchet (2016) What is a WebSocket? Available at: http://socketo.me/docs/ (Accessed: 28 June 2017).
  55. Roth, G. (2011) xSocket. Available at: http://xsocket.org/ (Accessed: 22 June 2017).
  56. Sagor, R. (2000) Guiding School Improvement with Action Research. Alexandria, Virginia: Association for Supervision and Curriculum Development.
  57. Scabia, M. (2011) Working with Stage3D and perspective projection. Available at: http://www.adobe.com/devnet/flashplayer/articles/perspective-projection.html (Accessed: 30 June 2017).
  58. Schema.org (2011) Home. Available at: http://schema.org/ (Accessed: 16 June 2017).
  59. Serota, R. (2016) rserota/wad. Available at: https://github.com/rserota/wad (Accessed: 30 June 2017).
  60. Simpson, J. (2017) Spatial Audio. Available at: https://howlerjs.com/assets/howler.js/examples/3d/ (Accessed: 30 June 2017).
  61. Socket.IO (2012) socketio/socket.io-client. Available at: https://github.com/socketio/socket.io-client (Accessed: 30 June 2017).
  62. Socket.IO (2017) Socket.IO. Available at: https://socket.io/ (Accessed: 30 June 2017).
  63. Stangvik, E. O. (2011) websockets/ws. Available at: https://github.com/websockets/ (Accessed: 30 June 2017).
  64. Stangvik, E. O. (2016) websocket client benchmark. Available at: http://websockets.github.io/ws/benchmarks.html (Accessed: 13 June 2017).
  65. Stemkoski, L. (2013) Mouseover (Three.js). Available at: https://stemkoski.github.io/Three.js/Mouse-Over.html (Accessed: 23 June 2017).
  66. Studio Puckey (2013) Radio Garden. Available at: http://radio.garden/live/ (Accessed: 30 June 2017).
  67. Surrell, J. (2016) Create GUID / UUID in JavaScript?. Available at: https://stackoverflow.com/a/105074/2387266 (Accessed: 22 June 2017).
  68. The Internet Engineering Task Force (1971) Request for Comment 147: The Definition of a Socket. Available at: https://tools.ietf.org/html/rfc147 (Accessed: 30 June 2017).
  69. The University of Utah (2008) Problem Solving. Available at: http://www.cs.utah.edu/~germain/PPS/Topics/problem_solving.html (Accessed: 19 May 2017).
  70. Three.js (2017) Javascript 3D library. Available at: https://threejs.org/ (Accessed: 30 June 2017).
  71. Tiigi, T. (2015) tonistiigi/audiosprite. Available at: https://github.com/tonistiigi/audiosprite (Accessed: 29 June 2017).
  72. Tr., H. (2015) huytd/agar.io-clone. Available at: https://github.com/huytd/agar.io-clone/wiki/Game-Architecture (Accessed: 30 June 2017).
  73. Transloadit (2017) Generate a waveform image from an audio file. Available at: https://transloadit.com/demos/audio-encoding/generate-a-waveform-image-from-an-audio-file (Accessed: 30 June 2017).
  74. Troubat, A.-C. (2013) The Musical Time Machine. Available at: http://radiooooo.com/ (Accessed: 01 June 2017).
  75. Twitter (2011) Bootstrap 1.0.0. Documentation. Available at: http://bootstrapdocs.com/v1.0.0/docs/ (Accessed: 20 June 2017).
  76. Unboring (2015) Inspirit. Available at: http://inspirit.unboring.net/ (Accessed: 29 May 2017).
  77. University of Florida (2005) The MUD. Available at: http://iml.jou.ufl.edu/projects/Spring05/Hill/mud.html (Accessed: 22 June 2017).
  78. WebComponents (2017) Home. Available at: https://www.webcomponents.org/ (Accessed: 11 June 2017).
  79. Whitestorm.js (2017) Examples/design/saturn. Available at: https://whs-dev.surge.sh/examples/?design/saturn (Accessed: 30 June 2017).
  80. Wikimedia Foundation Inc (2017) List of collaborative software. Available at: https://en.wikipedia.org/wiki/List_of_collaborative_software (Accessed: 30 June 2017).
  81. World Wide Web Consortium (2015) Web Audio API. Available at: https://www.w3.org/TR/webaudio/ (Accessed: 30 May 2017).

Written by
Marko Mitranić
Full-Stack developer & University Lecturer

Homullus
I explain how I did stuff, and you (hopefully) give me your input. Expect mostly code, design and travel.
