Insights, Experiences and Thoughts + some military-focused ideas
Last week I attended the International Telecommunication Union (ITU) AI for Good Global Summit in Geneva, a gathering to celebrate the advances of human possibility through technology, with a focus on artificial intelligence. Scientists, sociologists, politicians, diplomats and artists from all corners of the world came to showcase their work. The agenda was superbly curated, covering everything from wellness, to agriculture, to economics, to humanitarian advances and much more. The tracks all overlapped, so while the event lasted four days, the content amounted to roughly four work weeks' worth. This post represents the small fraction I was able to experience, and it also includes some of my thoughts on the military use-cases of the technologies I saw and tried.
Mind Over Machine
I was very much looking forward to trying out the Think and Zoom demo, a brain-computer interface (BCI) device that magnifies whatever is in sight just by using the mind. Zuby Onwuta created this technology as “Brain Control for Blind Tech”, inspired by his own experience as a user of glasses. I was particularly eager to experience it for myself, as this technology could be very useful for soldiers on the ground as well as fighter pilots. I asked him if it was possible to add infrared capability to his technology, and he said it was.
[Demo] I got to demo two uses of his technology: the primary “Think and Zoom”, and another that creates digital movement through thought. For the first, he asked me to look at the screen of his phone, which was showing what was behind the phone (a poster); on the screen, however, the image was blurry and I couldn't see the poster clearly. He told me to focus on the blurry image on the screen and make it clear. I had a hard time with this and was not able to make the poster clear. A friend who was with me managed it, not by looking at the blurry screen but by looking at the actual poster.
In this picture I am concentrating on a screen with a digital bird in a cage, which I am supposed to get out of the cage by thinking. I found it easier when I closed my eyes; by the second and third attempts I was able to do it without closing my eyes. Zuby (on the right) was trying to distract me by asking me questions. Answering them pulled me away from the cognitive effort and concentration I had on getting the bird out of the cage, and when he did that the bird would drop slowly back into the cage and I would have to get it back out. This made me think of the impracticality of having to rely on focused brain concentration with a brain-computer interface device during combat, where many other cognitive tasks are competing for attention. But! I thought about how this could be brain-trained until it becomes a form of cognitive muscle memory. To give a hypothetical futuristic scenario: a fighter pilot could be controlling their forward formation of drones and swarms with their mind, human-machine teaming with a constellation of autonomous ISR support leveraging sensors, the electromagnetic spectrum and space. This still doesn't fully answer the question of concentration, but neuroplasticity and muscle memory mean that humans have the capacity to make certain things “natural”. It was very convenient that I met the world's best beat-boxer, ReepsOne, whose brain has been studied and provides some useful cues. At the end of the next section, about the artistic intelligence performance, I expand on ReepsOne.
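To make the control dynamic concrete, here is a minimal Python sketch of a concentration-driven control loop of the kind the bird demo suggests. All of the names, focus values, thresholds and rates below are invented for illustration and do not reflect how Think and Zoom actually works:

```python
# Hypothetical sketch: a threshold loop over a normalized "focus" signal
# (as might be derived from EEG band power). None of the numbers or names
# below come from the actual Think and Zoom system.

def update_altitude(altitude, focus, threshold=0.6, rise=1.0, fall=0.4):
    """Lift the bird while focus stays above threshold; let it sink otherwise."""
    if focus >= threshold:
        altitude += rise       # sustained concentration lifts the bird
    else:
        altitude -= fall       # distraction lets it drift back toward the cage
    return max(altitude, 0.0)  # the cage floor is altitude 0

def run(samples, altitude=0.0):
    """Feed a stream of focus samples through the control loop."""
    for focus in samples:
        altitude = update_altitude(altitude, focus)
    return altitude

steady = run([0.8, 0.9, 0.7, 0.8])       # uninterrupted concentration
interrupted = run([0.8, 0.3, 0.2, 0.3])  # answering questions mid-task
```

With the steady stream the bird keeps climbing; with the interrupted stream it sinks back to the cage floor, mirroring what happened each time Zuby asked me a question.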
Hyper-Specialism Opens the Door for Human Augmentation
The performance “AI Pushing the Limits of Artistic Intelligence” took place in the gorgeous Human Rights and Alliance of Civilizations Room, used by the United Nations Human Rights Council at the UN HQ in Geneva, and was curated in partnership with the Berlin-based State Studio. The dramatic lighting on the ceiling of the room added a tremendous amount of drama, intensity and intimacy to the experience. We were in a cocoon of space wrapped in lighting, sounds and visual experiences, which forced us to challenge our understanding of what our bodies and minds can do and how we can augment very fundamental aspects of our bodies (in this case, muscles and voice were the centerpiece).
We started the evening with BBC presenter LJ Rich, a singer-songwriter who has synesthesia: “a perceptual phenomenon in which stimulation of one sensory or cognitive pathway leads to automatic, involuntary experiences in a second sensory or cognitive pathway”. It was incredible to hear her talk about how her mind experiences sounds in conjunction with taste and smell. (FYI: she recommended Bach as a perfect pairing for green tea.)
We witnessed the four-time world champion RoboZee perform his robot dance. I never knew that it's been 40 years since humans first started dancing like robots. It's actually an incredibly hard thing to do, as you are meant to pop every muscle independently from the rest of your body while moving like a robot. [I will post the performance video here once it is out.]
Before meeting Christian “Mio” Loclair I had heard about his studio Waltz Binaire and his latest work Narciss, an AI looking into a mirror and thinking about itself. Mio is a computer scientist (and former robot dancer) who currently focuses on what I perceive to be philosophical and reflective artistic initiatives around humanity and technology. His part of the performance was a reflection on our relationship with technology, taking us on a journey from Narciss to ourselves, and on the path we travel with technology from amazement, to critical curiosity, to depression and back.
In the cocoon of the Human Rights Room I was absorbed by and glued to each performance. ReepsOne pushed my mental bounds of how I could use my voice, larynx, lungs, diaphragm and mouth to augment the way I experience my body and to extend my sense of self and identity. He has the fastest recorded diaphragm in the world, which makes him an incredibly impressive beat-boxer.
His sound sculptures made me rethink the ways in which we can express ourselves as I watched him beat-box and create a digital sculpture of his sound, which he controlled and shaped in real time. He showed us “ReepsOne Does Not Exist”, the world's first virtual reality music video made with gyroscopic 3D sound. It is incredible to experience in VR: the viewer is dynamically teleported between acoustic experiences and atmospheres with sonic accuracy, combining abstract environments with surrealist elements and a haunting vocal composition. A must-see!
His beat-boxing attracted the attention of University of London Professor Carolyn McGettigan, Director of the Vocal Communication Laboratory, whose research focuses on “understanding the behavioral and neural processes involved in vocal communication”. She wanted to see what was going on in Reeps's brain when he beat-boxed, so she ran MRI scans on his brain and repeated the process with someone who had just learned to beat-box. The result: compared to ReepsOne, a novice beat-boxer activates not just the same motor cortex and sensory areas of the brain, but additional areas for listening and planning movements. The conclusion was that Reeps is an expert and “has motor memory for making the very complex sounds of beat-boxing and exhibits fine control of the articulators and breathing in his performance which is the result of years of training and practice”. This led Professor McGettigan to conclude that “being an expert doesn't always mean activating more brain areas. When a motor skill becomes well learned, the expert shows more focused activation”.
This made me think again about my hypothetical future scenario of brain-computer interfaces in military operations and the challenges that focused concentration would pose. Professor McGettigan's findings suggest that motor memory in experts becomes automated and part of them, so there are enough grounds to hypothesize that the same would hold for someone who trains their brain to control drones. The next question is how long it would take to reach an acceptable combat-readiness level. Perhaps neuro-modulation technologies (e.g., Halo) that engage the motor cortex by inducing hyper-plasticity, so the brain can learn faster, could be a potential solution…
Full Body Surrogacy for Collaborative Communication
There are researchers working on augmenting our bodies beyond our spatial limits. In general we are very agile when it comes to expanding the physical limits of our body; we do it on a daily basis. When we hold a pen to write, the pen becomes an extension of our body; the same goes for a hammer or a car. There have been interesting studies such as the rubber hand illusion, in which our mind can be tricked into believing a rubber hand is ours when it is not. Assistant Professor Dr. Yamen Saraiji from Keio University in Japan brought his Fusion system: a set of robotic arms with a camera, worn as a backpack.
The backpack I am wearing holds the robotic arms, which extend in front of me; peeping above my right shoulder is the camera, which allows the person sitting down on the right-hand side of the picture, wearing a VR headset, to see what I am doing. On the screen above her you can see what she sees. In her hands she has controllers with which she controls my robotic arms. Together we had to cooperate to put a ball in a cup. It was a surreal experience to have (1) extra limbs on me and (2) someone else controlling something connected to my body. At the same time it felt very natural to collaborate with another human towards a common goal and to have new limbs to leverage.
In terms of the potential military use-case, this backpack can be controlled remotely as it has Internet connectivity. It weighs 10 kg/22 lbs, which is probably too heavy if it were meant to be worn in conjunction with a ~20 kg/~44 lbs rucksack. However, there could be other uses: the arms need not be “hands” but could instead be something useful in the operational environment (e.g., something that helps get through jungle terrain, or a shield in an armed urban combat scenario).
From Zero → to Music Composer in Minutes
I found a table with a welcoming message inviting everyone to come and compose a song with AI. There I met the super friendly Dr. Maya Ackerman, an Assistant Professor at Santa Clara University teaching Artificial Intelligence and Machine Learning. She is also the founder of Alysia, an app that is democratizing songwriting with artificial intelligence. It is true, you can do it in minutes! And that is how I composed my first song, which I wrote about the AI for Good Summit; you can check it out here 🎵. It was a simple process: pick the theme of the song (happy, love, etc.), write some verses, then pick a melody for each verse out of the options the AI prepared.
Brainstorming on Quantum COTS and the Military
Before getting on the AI, Quantum and Cyber Threats panel, I had a chance to speak one-on-one with my co-panelist Dr. Mark Jackson, a theoretical physicist and quantum technologies entrepreneur. I understand that quantum communications will bring us super-secure communications and that quantum computing will mean crazy fast computing, but my question was: when is that happening? When will that be real?
[Side note: I recommend this Quantum Technologies Primer for National Security Professionals]
Dr. Jackson was kind enough to entertain my questions. I asked him when quantum communications would become more commercially available. Apparently they already exist: the company he works for (Cambridge Quantum Computing) already has technology that makes it as simple as a quantum communications add-on device that slides right into your server rack (granted, with some coding and protocol updating). So the next question was: when can we have crazy fast in-house quantum computing? Quantum computers require frigid temperatures, near absolute zero (0 K / −273.15 °C / −459.67 °F), so that is not possible in a regular office and far too costly, which means we won't have quantum computing in our laptops anytime soon. But he said we can already access quantum computing via a proxy quantum computer, and he referenced IBM, which already offers this. The way that ‘quantum cloud service’ works is that you send over everything you need computed, such as massive amounts of data that would take years for a classical computer to churn through; the quantum computer crunches through the data in (let’s say) half an hour, and the results are sent back. Dr. Jackson gave me two industries that have found this service very useful: pharmaceutical research and finance.
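The offload workflow he described (ship the problem out, let the remote machine crunch it, pull the results back) follows the same submit/poll/fetch pattern as any cloud compute job. Here is a hedged Python sketch of that pattern; `QuantumCloudClient` and its methods are invented stand-ins, not IBM's actual API, and the “computation” is trivially faked:

```python
# Hypothetical sketch of the 'quantum cloud service' offload pattern:
# 1) submit a job, 2) poll until the remote back end finishes,
# 3) retrieve the results. Real providers have their own SDKs; this
# client is a made-up stand-in that "completes" jobs instantly.
import time

class QuantumCloudClient:
    """Stand-in for a provider SDK; not a real quantum back end."""
    def __init__(self):
        self._jobs = {}

    def submit(self, payload):
        job_id = f"job-{len(self._jobs)}"
        # A real service would queue the job; here we fake the answer.
        self._jobs[job_id] = {"status": "DONE", "result": sum(payload)}
        return job_id

    def status(self, job_id):
        return self._jobs[job_id]["status"]

    def result(self, job_id):
        return self._jobs[job_id]["result"]

def offload(client, payload, poll_seconds=0.0):
    job_id = client.submit(payload)           # 1. send the problem over
    while client.status(job_id) != "DONE":    # 2. remote machine crunches it
        time.sleep(poll_seconds)
    return client.result(job_id)              # 3. results come back

answer = offload(QuantumCloudClient(), [1, 2, 3])
```

The point of the pattern is that the expensive hardware stays in the provider's cryogenic facility; the user only ever exchanges job payloads and results over the network.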
Military Use Case: My take-away from this exchange is that quantum COTS is here. I started to think of military use cases, and given the proliferation of sensors, the many multi-domain moving parts, the implications of delivering effects in Anti-Access/Area Denial (A2/AD) environments, electronic warfare, and satellite assets, to name a few, I think operational planning can benefit from quantum computing because (assuming it is programmed to do so) it can churn through massive amounts of data and a vast number of scenarios and produce several courses of action with damage assessments.
“The speed of war has changed, and the nature of these changes makes the global security environment even more unpredictable, dangerous and unforgiving. Decision space has collapsed and so our processes must adapt to keep pace with the speed of war.” — General Dunford, Chairman of the Joint Chiefs of Staff
Given the higher tempo of conflict and the need for quicker decision-making, quantum cloud computing could help military leadership keep pace with the speed of war.
In the segment on ‘Unintended Consequences of AI’, moderated by UN Director of Disarmament Affairs Anja Kaspersen, I was asked to address fears of lethal autonomous weapons systems and concerns that they are uncontrollable. I took that opportunity to explain that while nations are indeed investing in autonomy across the military spectrum of operations, it is important to remember that weapons (autonomous or not) are deployed by humans, and that those humans in uniform are professional soldiers trained in doctrine, military ethics and the law of armed conflict (LOAC); as per command responsibility, a human will be responsible for those actions. I reminded them that should the technology not be predictable, a commander will not find it desirable to use, and that there are many circumstances where autonomy is not preferable.
The Art Corner
Exhibiting my #ArtAboutAI at the AI for Good Global Summit was my first public showing of my art and it couldn't have been a better place to make that debut. It was great to have conversations about it with different people from all walks of life!
For those hearing about #ArtAboutAI for the first time, it is a series of educational art pieces meant to raise awareness about advances in AI technology and research. You can see all the pieces and explanations here.
I also told my visitors about my game Sapien2.0 which is meant to raise awareness on technologies that will change our human experience from birth to death.
It is available in a mobile friendly format and is currently available in five languages: English, Spanish, Russian, Chinese and Arabic.
Some technologies featured in this game include the artificial uterus, whole-brain emulation and DNA editing.
In the art corner I was in good AI-art company, alongside photographer Jeff Rovner, who photographs vintage toys that are futuristic or robotic in nature and superimposes machine learning code, etched in glass, on top of the printed photograph.
State Studio was there in full force, with both Christian “Mio” Loclair’s Narciss, the installation of an artificial intelligence looking at itself, and Roman Lipski’s work, in which he leveraged AI as a feedback loop for his own artwork, giving him new inspirational ideas.
Conclusion: This is a very small snapshot of everything that took place at the Summit. The videos will be posted by ITU in the coming weeks, and you’ll be able to watch the full program.
Dr. Lydia Kostopoulos (@Lkcyber) consults on the intersection of people, strategy, technology, education, and national security. She addressed the United Nations member states on the military effects panel at the Convention on Certain Conventional Weapons Group of Governmental Experts (GGE) meeting on Lethal Autonomous Weapons Systems (LAWS). Formerly the Director for Strategic Engagement at the College of Information and Cyberspace at the National Defense University, a Principal Consultant for PA, and a higher-education professor teaching national security at several universities, her professional experience spans three continents, several countries and multi-cultural environments. She speaks and writes on disruptive technology convergence, innovation, tech ethics, and national security. She lectures at the National Defense University and the Joint Special Operations University, is a member of the IEEE-USA AI Policy Committee, participates in NATO’s Science for Peace and Security Program, and during the Obama administration received the U.S. Presidential Volunteer Service Award for her pro bono work in cybersecurity. In efforts to raise awareness on AI and ethics she is working on a reflectional art series [#ArtAboutAI] and a game about emerging technology and ethics called Sapien2.0.