Parenting Through the Looking Glass

The choices of a new generation

Nicholas Teague
From the Diaries of John Henry
16 min read · Aug 6, 2018


I was recently having a conversation with Josue, a friend from my Houston church. He had seen me reading some really geeky book on deep learning or the like, and the discussion turned towards the future of technology, and of course no such topic is complete without at least a consult with the projections of science fiction. I mentioned that I thought the best science fiction on TV right now is a Netflix series called Black Mirror, which we had both seen, and we debated for a while which episodes were the best. As is the sign of a good pastor, he started to steer questions toward the intersections of these stories and those aspects of faith or morality that could be the focus of a sermon. This particular church has a neat history of asking these kinds of questions; our senior pastor has even published books seeking lessons of spirituality from what might be unexpected works of fiction like the HBO series The Sopranos — which for the uninitiated is not just a story about mobsters, but is also a gripping drama addressing themes like family, psychology, and loyalty. As the debate over what we could learn from science fiction progressed, Josue had the spark that this kind of discussion could make for a neat group event, and long story short we ended up agreeing to put together an ad hoc meeting where we would invite a few friends to watch an episode of Black Mirror as a group, and then both of us would follow with a presentation — where I would talk about the technical aspects of the episode and he could touch on more biblical themes. After a little back and forth we settled on the episode Arkangel. The following represents my prepared slides and talking points for this address. And without further ado.

Black Mirror is a British science fiction series produced by Netflix. The episodes consist of self-contained stories set in periods ranging from the near to the distant future. A common theme is that the plots center on projections of future paradigms of technology, seeded by those aspects of tech that are just emerging today. For example, some topics of address have included the encroachment of gamified social media interactions on our relationships, alter egos enabled by the virtual worlds of massive multiplayer gaming, and direct brain channels for computer interface. It's certainly worth noting that the tone of this series tends to be fairly dark, in some cases extremely so, as the stories ask not just where these technologies might take us, but also how they might go wrong. This presentation will mostly avoid those darker themes and instead focus on those aspects of a particular episode which are grounded in modern tech.

A general commentary on the genre of science fiction is that I find the best stories don't just focus on fantastical realizations of space travel or new technologies, but balance these aspects with timeless elements of human nature — whether those themes be political intrigue, romance, or the struggles of an underdog. There should be at least one protagonist who allows us to view what may be a strange new world through a timeless prism. We need someone to identify with or root for lest the stories become too surreal and unrecognizable.

The Arkangel episode incorporates an incredible range of future technologies, but at its core it is a timeless allegory about parenting and the relationship between a single mother and her child. The story asks questions about the risks of helicopter parenting, of what happens when a child is raised in an overly protective environment and is not allowed to make their own mistakes. As this child matures and encounters peers of bad influence, we see how the well-intentioned interference of an overbearing parent may become counterproductive. In the end, a central question of this episode is the development of a child's personality when subjected to an extremely sheltered existence, and how that may handicap their ability to deal with temptations as they become an adult. This is a story about psychology.

The technological aspects of this episode cover a wide gamut, but at their core they explore potential capabilities of computer interaction with a direct brain interface. The realization demonstrated in the episode is a fairly seamless one of a child implanted with some brain technology allowing for the monitoring of sensory experiences via a remote terminal. This brain interface doesn't only facilitate third-party monitoring, it also serves to directly interface with those same sensory inputs experienced by the recipient, such that subsequent encounters with adult themes like violence, sexuality, or trauma are censored by obscuring the offending parts of vision or hearing with graphic overlays or audio obstruction. The actual implementation of such a brain implant is abstracted over in the telling, but what's really neat is that some of the core technologies that would enable these types of capabilities are already being explored by the scientists of today. This presentation will highlight some categories of technology that could be precursors to the paradigms of Arkangel. We'll address, from a high level, brain science, sensory monitoring, sentiment analysis, and augmented reality.

I should probably preface this section with a disclaimer that I don't consider myself an expert on the human brain; this is our body's most complex organ after all, and scientists may spend entire careers studying only portions of its functioning. However, in the course of my explorations I have found there are a few high-level features of its architecture that I think may help to better intuit the system, and so I will briefly offer some highlights here.

No discussion of the brain would be complete without a description of neurons, the fundamental building blocks of computation. Each neuron is a single cell which interacts with surrounding neurons by way of electrical pulses of varying frequency. The human brain has billions of neurons and trillions of synapses (the interface points between neurons). The brain selectively and continuously adapts each individual neuron's firing intensity based on realtime exposures, in a process known as selective adaptation. If neurons' only means of interaction were these electrical pulses they might be easier to simulate; however, neurons are also subject to the influence of more subtle chemical and hormonal factors as well.
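
To make the computational analogy a bit more concrete, here is a minimal sketch in Python of a neuron's artificial counterpart: a unit that sums weighted inputs and squashes the result into something loosely resembling a firing rate. This is an analogy only; the weights and inputs are arbitrary illustration values, and biological neurons are far richer than this toy model.

```python
import numpy as np

# A loose analogy only: an artificial "neuron" sums weighted inputs and passes
# the result through a nonlinearity, crudely standing in for how a biological
# neuron's firing rate depends on signals arriving at its synapses.
# The inputs, weights, and bias below are arbitrary illustration values.
def artificial_neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    activation = np.dot(inputs, weights) + bias   # combine incoming signals
    return 1.0 / (1.0 + np.exp(-activation))      # squash to a "firing rate" between 0 and 1

firing_rate = artificial_neuron(np.array([0.2, 0.9, 0.1]), np.array([0.5, -0.3, 0.8]), bias=0.1)
print(firing_rate)
```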

Scientists have succeeded to some extent in mapping regions of the brain based on their specializations, using tools like MRI machines to measure neuron activation rates when exposed to various sensory inputs. From what I've gathered, each of these regions may have higher rates of connectivity between neurons within its boundaries, although there will be degrees of interconnectivity crossing those boundaries throughout. An important distinction can be made between those aspects of our brain that make up our sensory-motor processes, which bear some similarity to a modern machine learning architecture known as the feedforward neural network, in which a signal is transformed as it is fed through progressive layers of interconnected neurons. Other aspects of the brain, such as those that form memories, bear more relation to the architecture of attractor networks, with collective progressions of firing patterns across a set of neurons.
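
A toy illustration of the attractor idea follows (my own sketch, with arbitrary sizes and values, and a gross simplification of biological memory): a single stored binary pattern acts as an attractor, and a corrupted copy of that pattern settles back toward the stored memory through repeated updates.

```python
import numpy as np

# A minimal Hopfield-style attractor network sketch. One binary pattern is
# stored in the weights; a corrupted copy of it relaxes back to the stored
# memory through repeated updates. Sizes and values are arbitrary.
rng = np.random.default_rng(1)
pattern = rng.choice([-1, 1], size=16)

# Hebbian-style weights storing the single pattern (no self-connections).
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Start from a noisy copy of the pattern and let the state relax.
state = pattern.copy()
state[:4] *= -1                        # flip a few bits to corrupt the "memory"
for _ in range(5):
    state = np.sign(W @ state)         # update each unit toward the attractor

print(np.array_equal(state, pattern))  # the corrupted state recovers the stored pattern
```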

In a multi-disciplinary scientific specialization known as complexity theory, researchers study systems which exhibit what are called emergent properties. One of the key findings of this realm is that for systems of sufficient complexity it is impossible to understand the operation of the collective merely by studying the individual components from which the system is built. There are irreducible behaviors of these systems that arise with increased scale, interconnectivity, or frequency of interplay, for instance.

Consider this photograph that I took during the sunset departure of a bat colony from the bridge under which they reside in Buffalo Bayou Park. Each of these bats acts independently based on audio sensing of the proximity and movement of its immediate neighbors. If you ever get a chance to visit I do recommend it, it's a really neat experience (if you can't make it to Houston there's another colony on campus at the University of Florida). One thing to ponder is that the movement of the bats from under the bridge doesn't happen simultaneously; there is a progression of a stream of bats even though there is no "queen" directing the group. If all bats left simultaneously they would crash to the ground in a mass of entanglement, so there is some coordination, but only one that arises from each bat acting independently in a probabilistic fashion. Consider too that if you visit this colony on multiple days you'll find that the direction they travel changes from evening to evening, yet even though those earliest bats to depart have no interactions with those that later follow, the colony collectively makes decisions.

The analogy I'm trying to draw is that our brain cells operate in a fashion loosely comparable to the bat colony. Each neuron is an independent agent, only interacting with those other neurons to which it is directly connected. The experience of consciousness is not one realized merely through the actions of specific neurons; it is through the collective interplay of the massive networks of these emergent systems, of both the feedforward and attractor networks which comprise our brain and nervous system, that we may experience consciousness and interact with coarse-grained representations of our environment.

convolutional filter layer progression demonstration via [link]

The episode's feature of monitored sensory input measurements opens a whole can of worms, so before addressing the external aspects of sensory monitoring let's first try to walk through some basic elements of the operation of the human visual cortex. It turns out that we have some help here, in that a specific architecture of modern machine learning algorithms was partly built to mimic elements of the operation of a brain's visual cortex. That machine learning architecture is known as a convolutional neural network, and it operates as a kind of feedforward network: an encoded image is fed through a progression of interconnected neural network layers for transformation to some interpretation, with that transformation taking place via the numerical weightings and activations of each artificial neuron's signal, analogous to the transformation of a biological neuron's inputs to some derived output electrical pulse firing rate. If you study the progression of these layers in a convolutional neural network, you'll find that each is derived from an algorithmic supervised training operation such that the network recognizes features of increasing complexity in each layer as the image progresses through the network. For example, an early layer may detect edges in an image, a subsequent layer may detect curves or straight lines from those edges, another layer may detect simple shapes from those curves and lines, and later layers in the network may detect features of increasing complexity such as faces, houses, or cars.
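
For readers curious what such an architecture looks like in code, here is a minimal sketch of a small convolutional network written with PyTorch. The layer sizes and class count are arbitrary choices of mine rather than anything from the episode; the point is only the layered structure, with convolutional stages stacked so that later stages can build on the simpler features detected by earlier ones.

```python
import torch
import torch.nn as nn

# A minimal convolutional network sketch: each convolutional stage can learn to
# respond to progressively more complex features (edges -> shapes -> object parts).
class TinyConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # early layer: edge-like filters
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # middle layer: simple shapes
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # later layer: composite parts
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x)       # progressive feature extraction
        x = x.flatten(1)
        return self.classifier(x)  # final interpretation, e.g. "face", "house", "car"

# Usage: a batch of one 64x64 RGB image produces a vector of class scores.
logits = TinyConvNet()(torch.randn(1, 3, 64, 64))
```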

The brain's visual cortex turns out to operate in a similar fashion, as optic nerves in the eye transform images into encoded electrical signals, upon which neurons in the visual cortex recognize edges and shapes. I'm partly projecting from my somewhat limited knowledge here, but I believe one key difference is that while a modern artificial convolutional neural network operates primarily through the progression of an image through a feedforward network's interpretation, in a brain the recognition of the most sophisticated features takes place outside of the visual cortex, via the interplay of its feedforward aspects with other lobes' attractor networks, which store representations of memories and learned properties. So the brain's visual cortex is like a convolutional neural network's early layers that detect simple features of an image, which are then interpreted by other regions of the organ.

via TEDx San Francisco

Having established some of the building blocks of the brain's visual processing, let's now address the more futuristic application of sensory input monitoring, and to do so we'll turn to an excellent 2017 TED talk by Jack Gallant on the very subject. The idea of this demonstration is that once we map an individual's brain firing patterns in response to stimuli using some neuron firing measuring device (which could be an fMRI or some future high-fidelity EEG device, for instance), we can then interpret and decode what that brain is subsequently experiencing based on realtime measurements of neural activity in response to some unknown stimulus. Here we see a demonstration dating as far back as 2011, in which a person's view of some movie is recreated based on neural measurements, with the images on the left being the video stills depicting an elephant, paired with a corresponding image below filtered to show only that image's edges, and the images on the right being the decoded neural activity attempting to recreate the subject's vision. Obviously the fidelity of this demonstration's recreations leaves a little to be desired, but consider how far computer generated graphics in children's movies have progressed since Pixar's 1995 release of Toy Story, for instance.
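
To give a rough sense of how such decoding can be framed (this is my own illustrative sketch and not the method used in the research shown in the talk), one simple formulation is to learn a linear map from measured brain activity to low-level image features during a calibration phase, then apply that map to new measurements. The data and array shapes below are simulated placeholders.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Illustrative sketch only: learn a linear map from simulated "voxel" activity
# to low-level image features, then decode features for new, unseen activity.
rng = np.random.default_rng(0)
n_train, n_voxels, n_features = 500, 2000, 128

true_map = rng.normal(size=(n_voxels, n_features))        # stand-in for the brain's response
image_features = rng.normal(size=(n_train, n_features))   # e.g. edge responses per video frame
brain_activity = image_features @ true_map.T + rng.normal(scale=0.5, size=(n_train, n_voxels))

decoder = Ridge(alpha=10.0)
decoder.fit(brain_activity, image_features)                # calibrate on known stimuli

new_activity = rng.normal(size=(1, n_voxels))              # activity for an unknown stimulus
decoded_features = decoder.predict(new_activity)           # reconstructed feature estimate
```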

via TEDx San Francisco

I'll highlight one more example from Jack Gallant's excellent TED talk, as I think it further illustrates what kind of capabilities will come to fruition in this space. In this example the image on the right is the video being viewed by a subject whose brain has been previously mapped, and it is the key words appearing on the left that are decoded from neural activity. Notice how this demonstration (a more recent one, from 2016) is not only accurately describing the features of the movie, but is also doing so with text weightings to demonstrate the prevalence of a feature in the image, coupled with the corresponding spatial configuration. I speculate that this decoding is making use of monitoring more regions of the brain than the preceding example, which may have been specifically monitoring those lower-complexity features captured by the visual cortex, while this representation pairs that spatial representation with extra-visual neural activity for high-dimensional categorization of features (although another explanation could be that between 2011 and 2016 the fidelity of decoded images from the visual cortex climbed dramatically, allowing for a more traditional machine learning fueled image classifier).

Turning back to the context of the Arkangel episode, a key takeaway here should be that it is possible even today (albeit with complex machinery for the type of fidelity demonstrated here) to externally monitor sensory perceptions based on measurements of brain activity.

Now that we have decoded the sensory perceptions of our subject, the classification of features according to content is much more straightforward using technologies of today. Neural network classifiers can be deployed that, given some input such as images, audio, or text, output feature-specific categories derived from a supervised training operation. By supervised training I mean that our classifier will need to be prepared by developing a potentially big-data-scale set of sample inputs labeled according to the categories we wish to detect — for example, if we wanted to classify according to adult themes, one solution could be to train an image classifier with appropriately labeled video stills from Netflix's collection of PG and R rated movies. Through the algorithmic "training" using these demonstrations, our neural network would learn what types of image features are more prevalent in child-appropriate material while also recognizing those more adult themes that are only found in R rated scenes. A limitation of this approach is that our neural network will only generalize to the types of features found in the training material. There will be no common sense evaluation. Given some Renaissance master's painting featuring nudity, for instance, it is likely that our classifier would deem it inappropriate for children even though common sense would tell us that this is high art and appropriate as part of any adolescent's education; similarly, innocuous joking between peers about flatulence, boobs, or other edgy themes may be a worthwhile part of a teen's education even if this hypothetical classifier probably won't find it in its PG-movie-derived training set. I'll close this slide by noting that our classifications aren't necessarily limited to binary appropriate/inappropriate labeling; depending on how far we take our training data preparation, the range of features that our classifier may test for could be considerable.
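
As a hedged sketch of what such a supervised training setup might look like (the dataset, labels, and hyperparameters here are hypothetical placeholders, not anything Netflix or the episode actually uses), a common pattern today is to fine-tune a standard pretrained image backbone on two classes of labeled stills:

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical sketch: fine-tune a standard image backbone on labeled video
# stills, class 0 = "family friendly", class 1 = "mature themes". The data
# pipeline that would supply (images, labels) batches is omitted.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # replace the final layer with two classes

optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def training_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """images: (batch, 3, H, W) tensor; labels: (batch,) tensor of 0s and 1s."""
    optimizer.zero_grad()
    loss = loss_fn(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The limitations noted above apply just as much here: such a classifier will only ever be as discerning as the labels it was shown.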

So we've demonstrated our sensory monitoring, and we've classified that data based on adult content; the final piece to recreate the capabilities of the story would be to overlay generated features to obscure or otherwise transform what is being experienced by our subject. The potential for planting or overlaying sensory input directly into the brain is the most fantastical element of this fictional show. However, if we were to allow our subject to experience their sensory input through an external channel such as an audio/visual virtual reality rig, it is entirely possible to perform the types of manipulations shown in the episode using the technology of today.

An important distinction for this type of capability should be drawn between the categories of virtual reality and augmented reality. A virtual reality headset completely obscures external vision by replacement with near-eye displays — potentially realized with the mounting of a high-resolution smartphone screen, for instance. Part of the trick of making this virtual reality experience immersive is taking advantage of sensors such as the accelerometer in a smartphone to track head position, which enables correlating the orientation of a 3D, 360-degree video or game engine to physical movements. Note that there are already several apps available in the various app stores which offer these immersive video capabilities, with the only hardware accessory potentially required being a cheap smartphone-mounting headset, although for more intensive tasks such as video games dedicated hardware may be required. I speculate it is possible that cell phone chipsets and graphics processing capabilities will eventually catch up with what is currently on the market for video game console virtual reality environments.
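
As a toy illustration of the head-tracking idea (my own simplified sketch rather than any particular headset's implementation), the orientation of the rendered scene can be updated by integrating angular-velocity samples from the phone's motion sensors; real systems fuse multiple sensors and correct for drift far more carefully.

```python
import math

# Toy head-tracking sketch: integrate gyroscope angular-velocity samples
# (radians per second) into a yaw/pitch orientation that a renderer would use
# to rotate a 360-degree scene. Real headsets apply sensor fusion and drift
# correction; this is purely illustrative.
class HeadTracker:
    def __init__(self):
        self.yaw = 0.0
        self.pitch = 0.0

    def update(self, yaw_rate: float, pitch_rate: float, dt: float):
        self.yaw = (self.yaw + yaw_rate * dt) % (2 * math.pi)
        self.pitch = max(-math.pi / 2, min(math.pi / 2, self.pitch + pitch_rate * dt))
        return self.yaw, self.pitch

# Usage: feed sensor samples at, say, 100 Hz and hand the orientation to the renderer.
tracker = HeadTracker()
yaw, pitch = tracker.update(yaw_rate=0.3, pitch_rate=-0.1, dt=0.01)
```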

The features of virtual reality are expanded in the application of augmented reality. A defining feature of augmented reality hardware is the mixing of visual input from actual surroundings with an interactive 3D video overlay. In order to deliberately mix these two environments, the real and the virtual, the augmented reality set must develop a realtime 3D model of its surroundings. Many self-driving cars develop this capability for 3D modeling of surroundings with a laser-based sensor similar to radar called lidar, although with modern machine learning technology I expect it will increasingly become possible to infer 3D models using video camera input. When coupled with modern machine learning tools for the interpretation of images, an augmented reality rig can selectively overlay graphics tailored to the specific contents of the surroundings. Applications could include the well-intentioned but misguided protective censoring depicted in this episode, immersive gaming, or potentially other high-value use cases like car windshield overlays to facilitate safe driving or industrial / medical specialist augmentations.
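
Tying this back to the episode's censoring filter, here is a rough sketch of just the overlay step, assuming some upstream detector has already flagged regions of the camera frame (the detector itself and the box coordinates below are hypothetical): the flagged regions are blurred before the frame is displayed.

```python
import cv2
import numpy as np

# Rough sketch of the overlay step: blur regions of a camera frame that an
# upstream detector (not shown) has flagged, loosely analogous to the episode's
# censoring filter. Box coordinates are (x, y, width, height) and hypothetical.
def censor_regions(frame: np.ndarray, boxes: list) -> np.ndarray:
    out = frame.copy()
    for (x, y, w, h) in boxes:
        region = out[y:y + h, x:x + w]
        out[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)
    return out

# Usage: blur one flagged region of a dummy 480x640 frame.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
censored = censor_regions(frame, [(100, 100, 200, 150)])
```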

via TEDx San Diego

I'm not much of a video gamer myself, so for an illustration of the applications of augmented reality I'll turn to an informative and at times moving TED talk presented by Brian Mullens on the subject. Here Brian demonstrates an example of industrial augmentation for purposes like repair or installation. This augmentation could save money in several fashions, such as reducing technician training requirements, speeding up workflow, and contributing to a safer work environment through direct supervision of safety procedures such as lockout/tagout or other precautions. I can imagine a scenario of some technician with a minimal level of skill entering a new work environment and becoming productive from day one.

via TEDx San Diego

Mullens also offers a use case from an otherwise even more training-intensive category of specialization — medical and surgical procedures. Here it's not the safety of the specialist that comes into play, but that of the patient upon whom a procedure is being performed. Mullens notes that it doesn't even necessarily need to be the most complex procedures that are addressed here; given that there is a shortage of doctors for potentially life-saving procedures in a large portion of the developing world, this type of augmented reality could save millions of lives by facilitating healthcare for those in need.

Steve Jobs — Walter Isaacson

I’ll close with this quote because I think it speaks to an idea of computing that covers this and many paradigms to come. Steve Jobs’ vision for the computer was never just about keyboards, mice, and monitors — it was about expanding the horizons for humanity.

Steve Jobs on computers

*For further readings please check out my Table of Contents, Book Recommendations, and Music Recommendations.

Books that were referenced here or otherwise inspired this post:

The Gospel According to Tony Soprano — Chris Seay


Steve Jobs — Walter Isaacson


(As an Amazon Associate I earn from qualifying purchases.)

Albums that were referenced here or otherwise inspired this post:

Tuesdays, Thursdays, and Saturdays — Jimmy Buffett


(As an Amazon Associate I earn from qualifying purchases.)

Hi, I’m an amateur blogger writing for fun. If you enjoyed or got some value from this post feel free to like, comment, or share. I can also be reached on linkedin for professional inquiries or twitter for personal.

