What designers need to know about machine learning
Emma Rosenberg

I was recently at a two-day conference at ANU in Canberra: http://www.braveconversations.org/speakers/

It was ostensibly about what's wrong with the Internet and the web, and AI and machine learning were a big part of the discussions. The room of 80 or so people, from all walks of life and all ages, was very evenly divided into two camps: those who loved the technology and look forward to it, and those who feared a future they saw as inevitable. Some were actually terrified.

Amongst the speakers were Dame Wendy Hall and Dr Susan Halford, who both work with Tim Berners-Lee at the Web Science Trust in the UK. It's public knowledge that they've been working for a while on the semantic web: in effect, machines that find semantic meaning on behalf of humans. Wendy said his other great dream is to re-decentralise the Internet.

I think I've done both in Wyrdom, a set of autonomous algorithms that lets everyone on the planet engage, using semantics to bring them together. It is not an algorithm that artificially finds semantic meaning. It is a set of algorithms that lets humans determine semantic meaning together, and it scales well enough to let every human in the world contribute continuously, which removes a main reason Tim is looking for a machine to do it.

Design-wise, it required a binary modular method for people to join together online, using matching subject lines that each party fills into their own module. (Autonomy means the groups have to be moderated by consensus, but that's another algorithm I won't go into here.)
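To make the "binary modular" idea concrete, here's a minimal sketch under the simplest reading: each party fills in their own module, and two modules pair when their subject lines agree. The `Module` fields and the matching rule are my illustration, not Wyrdom's actual data model.

```python
from dataclasses import dataclass

@dataclass
class Module:
    """One party's half of the binary pairing. The field names are
    hypothetical; the post doesn't specify Wyrdom's data model."""
    owner: str
    subject_line: str

def exact_match(a: Module, b: Module) -> bool:
    # The simplest "binary" join: two independently filled-in modules
    # pair up when their subject lines agree (ignoring case and
    # surrounding whitespace).
    return a.subject_line.strip().lower() == b.subject_line.strip().lower()

# Each party fills in their own module:
mine = Module(owner="emma", subject_line="Vintage synthesisers")
yours = Module(owner="alex", subject_line="vintage synthesisers")
print(exact_match(mine, yours))  # True — the platform can invite a join
```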

The platform invites them to join after recognising the same or similar words, images, symbols and/or numbers. The modules let people find each other publicly but maintain secure privacy once joined, and any human anywhere can join the resulting semantic group.
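The post says matches trigger on "the same or similar" words, but doesn't say how similarity is scored. One common option is word-set overlap (Jaccard similarity); the threshold below is a hypothetical cut-off, chosen purely for illustration.

```python
def token_similarity(a: str, b: str) -> float:
    """Jaccard overlap between the word sets of two subject lines."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

INVITE_THRESHOLD = 0.5  # hypothetical cut-off for sending an invite

def should_invite(a: str, b: str) -> bool:
    # Invite two parties to join when their subject lines overlap enough.
    return token_similarity(a, b) >= INVITE_THRESHOLD

print(should_invite("vintage analog synthesisers", "vintage synthesisers"))  # True
```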

This required another algorithm that artificially re-decentralises the Internet by using an infinite 3D virtual realm to represent the hardware layer, and modules to represent and decentralise the nodes, thereby allowing many new webs in the virtual internet rather than just the one www of the current real Internet. (The intersection between one set of hardware and many nodes causes hierarchical access, and this platform needed autonomous access.)
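Here's a sketch of that "many webs, one realm" idea: a single 3D coordinate space hosting multiple independent webs of modules. The web names and edge structure are my illustration; the post only says the realm allows many new webs, not how they're wired.

```python
from collections import defaultdict

Coord = tuple[int, int, int]  # an (x, y, z) position in the virtual realm

class VirtualRealm:
    """One shared 3D 'hardware' realm hosting many independent webs."""

    def __init__(self) -> None:
        # web name -> adjacency list of module coordinates
        self.webs: dict[str, dict[Coord, set[Coord]]] = defaultdict(
            lambda: defaultdict(set)
        )

    def link(self, web: str, a: Coord, b: Coord) -> None:
        """Connect two module positions inside one named web."""
        self.webs[web][a].add(b)
        self.webs[web][b].add(a)

realm = VirtualRealm()
realm.link("recipes-web", (0, 0, 1), (4, 2, 1))   # one web...
realm.link("supply-chain", (0, 0, 1), (9, 9, 9))  # ...another, same realm
print(len(realm.webs))  # 2 independent webs sharing one coordinate space
```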

This in turn required another algorithm that could manage the resulting multiple 3D module webs without any reference at all to the module content or semantics, which have to remain autonomous to the users. The 3D number makes modules 'locatable' to others after the users have placed them semantically.
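The key property described here is a content-blind index: 3D numbers in, opaque links out, with nothing ever inspecting module content or semantics. A minimal sketch, with hypothetical method names:

```python
Coord = tuple[int, int, int]

class LocataBase:
    """A content-blind index: 3D coordinates in, opaque links out.

    Echoes the post's 'locata-base'; nothing here ever inspects
    module content or semantics — only where the users placed it.
    """

    def __init__(self) -> None:
        self._index: dict[Coord, str] = {}  # coordinate -> opaque hyperlink

    def place(self, coord: Coord, link: str) -> None:
        """Users place a module semantically; we store only its location."""
        self._index[coord] = link

    def locate(self, coord: Coord) -> str | None:
        """Anyone who knows the 3D number can resolve it to the link."""
        return self._index.get(coord)

base = LocataBase()
base.place((4, 2, 1), "https://example.org/opaque/abc123")  # hypothetical link
print(base.locate((4, 2, 1)))
```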

This also means the locata-base managing the modules doesn't actually contain any content, as that can all be stored remotely behind anonymous hyperlinks on scattered servers. Design-wise, this in turn required a 3D number that is infinitely expandable but can still fit within a set size limit on a UX screen. Introducing xyz@n: not just a hyperlink, but a virtual internet running millions of www's worth of modules.
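The post doesn't define the xyz@n format, so the following is only one illustrative reading: a 3D module coordinate (x, y, z) plus a web index n selecting which of the many virtual webs the module sits in. Python's arbitrary-precision integers keep it "infinitely expandable", while any address short enough to share stays within a bounded screen width.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class XYZatN:
    """One illustrative reading of xyz@n: coordinate plus web index.

    The real format isn't specified in the post; this is a guess
    chosen to be lossless and round-trippable.
    """
    x: int
    y: int
    z: int
    n: int

    def __str__(self) -> str:
        return f"{self.x},{self.y},{self.z}@{self.n}"

def parse_xyz_at_n(s: str) -> XYZatN:
    coords, web = s.split("@")
    x, y, z = (int(part) for part in coords.split(","))
    return XYZatN(x, y, z, int(web))

addr = parse_xyz_at_n("4,2,1@7")  # module (4, 2, 1) in web number 7
print(addr)                       # round-trips to "4,2,1@7"
```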

This in turn means that as soon as we get funding, we'll start designing 3D modular UX interfaces and open-source them. Because the design is modular, users can add their own interfaces regardless of the content; semantically, one 'document' might serve many different users for different purposes (a supply chain with a manufacturer, retailer, consumer and blogger, for example), so design-wise it has to be an interface shop as big as an app store, because that is indeed what it is.

Design is just the most fascinating thing!

Cheers.