School for Poetic Computation — Day 1

Agustin R Anzorena
Published in SFPC
Oct 10, 2016

Very very exciting…!!

The first day was an introduction by Taeyoon Choi to how the School works, the ideals behind it, how to move around, and how to take good care of the people, the installations and the knowledge (that is: spread it around..!). Students from all over the world, and good vibes..!! We all got to write down our ideas for learning and making at the School on big papers to be hung on the walls..!!

What to Learn/Make/Teach

Very nice detail: the School is actually located in the same building where Bell Labs NYC stood a “couple” of years ago (between 1898 and 1966). Bell Labs..!! Quite a few incredible scientArtists did their research here..!! And I’m excited to take a deep look at Michael Noll’s work.

So… first workshop ever in NYC’s SFPC:

(Thanx to Mushon Zer-Aviv, ;) )

After walking around the building a little, we came back to discover a man standing inside a cardboard box, hands pressed against his chest, holding a paper that read something like this: “I’m a Robot, with only a speech recognition package preinstalled. Make me exit this room”.
The only hint we had beforehand was: “(high level) Programmers will have a hard time at this exercise”.
And we kinda did. At first, every command we tried to give the robot bounced back with a “command not recognized”. It took the whole group 20 minutes or so to realize we were trying to speak to the robot in a very abstract, high-level way, when what we needed to do was something like “install bone and muscle packages”, then teach it to activate the muscles in its legs, bend its knees, balance its hips, wrap all that into a function, repeat it, and build more functions like that.
I proposed to build a Chicken-Dance procedure, just for the fun of it, but it might have taken too long.. haha!!
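The progression we stumbled into, primitives first, then functions wrapping them, then functions built on top of those, can be sketched in code. This is a toy Python sketch; the robot, its primitives and all the names here are invented for illustration:

```python
# Toy sketch of the exercise: the "robot" only understands low-level
# primitives, so any higher-level behavior must be built up from them.
# All names here are invented, not from the actual workshop.

class Robot:
    def __init__(self):
        self.log = []  # record of every primitive command executed

    # Low-level "preinstalled" primitives
    def activate_muscle(self, muscle):
        self.log.append(f"activate {muscle}")

    def bend(self, joint):
        self.log.append(f"bend {joint}")

    def shift_balance(self, part):
        self.log.append(f"balance {part}")


# First layer of abstraction: wrap the primitives into one reusable step
def take_step(robot, leg):
    robot.activate_muscle(f"{leg} leg")
    robot.bend(f"{leg} knee")
    robot.shift_balance("hip")


# Second layer: build on the previous function, not on the primitives
def walk(robot, steps):
    for i in range(steps):
        take_step(robot, "left" if i % 2 == 0 else "right")


robot = Robot()
walk(robot, 2)
print(robot.log)
```

Once `take_step` exists, nobody calling `walk` has to think about muscles and knees anymore, which is exactly the layering the exercise was pointing at.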

Things that programmers and non-programmers learnt from this exercise:

  • Layers of software: from the transistors up to the nice GUIs everywhere, everything is built in layers that abstract over each other to create more “complex” layers of software.
  • People create tools on top of these layers so that a lot of things become easier to do, and we don’t have to know how everything works deep inside a computer.
  • Sometimes we need to change the “branch” we are in, so we might need to learn how to go down a level in order to come back up again.

Interesting issues:

  • An interesting issue came up about how to access low levels in our own brains. What kind of methods or techniques would allow us to do that? Is it even possible? How close to controlling synapses can we get?
  • Another interesting question: what would we like to simply have uploaded to our brains? Matrix-style, is there a type of knowledge that we consider should not go through the process of learning?
  • What is the difference between a machine trying to learn (say, with various machine-learning algorithms) and a human? Thinking about it, one cannot simply just learn, right? The process of learning something involves the experiences, capabilities, emotions and present state of the person, and a lot more. In essence, the information gets appropriated and tinted with our own way of being and personality. Two people reading the same line from a textbook will not perceive its “informational content” the same way. And this is also why people need different amounts of time to learn things.
    So how would “just uploading information” to our brains work, given all this? Can data be stored neutrally for us? Our brain’s storage and “linking” do not function the way a computer’s do…

What do you think about it?
