Security, Machine Learning and Acting
Over the last year, I’ve actively absorbed a great deal of knowledge in both Machine Learning and Security. This grew out of a deep desire to pursue some combination of the two. I’m not an expert by any means, and am definitely still a baby fish in the big data lake, so my active pursuit of both practical & theoretical technical knowledge doesn’t stop.
I have, however, had quite an extensive foray into attempting to understand the human psyche, primarily through years of pursuing very theoretical approaches to stage acting. Human logic is what excites me the most and drives me to keep learning, from maths to theoretical computer science.
As a result, I’ve recently become a guest writer for StatusToday’s Medium blog, “Thoughts by StatusToday”. StatusToday is a security startup in the current cohort at Entrepreneur First, focused on human behaviour and cyber security, and on using AI to bridge the gap between the two.
TL;DR: Yesterday, my first piece of guest writing for their blog was published: a very high-level look at why we should humanise security, with a small steer towards machine learning (and how the two intertwine). By “humanise security”, I mean that we should commoditise the “human aspect” of security.
Give it a read and let me/us know what you think by writing a response on Medium!
What does Acting have to do with AI?
One of my favourite quotes comes from Uta Hagen’s ‘A Challenge for the Actor’, a book I read about five years ago (when I was 16). The quote still drives me today, though now as a mathematician/computer scientist rather than as an actor:
Theoretically, the actor ought to be more sound in mind and body than other people, since he learns to understand the psychological problems of human beings when putting his own passions, his loves, fears, and rages to work in the service of the characters he plays. He will learn to face himself, to hide nothing from himself — and to do so takes an insatiable curiosity about the human condition.
How ironic that most of the actors we see may not necessarily be the most “sound in mind and body”. More interestingly, this is the key to my interests in AI & Security.
I’ve always been obsessed with intelligence. Not in the sense of achievement, but in pursuit of the questions: How is intelligence developed and deepened? How can we model it?
Perhaps one way to unlock this is to view the discipline of AI as a way to teach computers to act: to teach them to understand the “psychological problems of human beings”, or, going back to basics, the formation of logic and intelligence itself.
“Computers are basically bit strings; how can they even begin to understand us?” you might proclaim. Yet we’ve come far in writing algorithms that can predict outcomes from data or describe its characteristics. If we think of data as simply stored information carrying (often hidden) knowledge, it all becomes a little clearer.
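To make that idea concrete, here is a minimal sketch (not from the original post; the data points and labels are invented) of perhaps the simplest possible “prediction from data”: a one-nearest-neighbour classifier, which predicts an outcome for a new point purely by consulting stored examples — the knowledge lives entirely in the data.

```python
def predict(point, examples):
    """Return the label of the stored example closest to `point`.

    `examples` is a list of (feature_vector, label) pairs; the "model"
    is nothing more than the stored data itself.
    """
    def sq_dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    closest = min(examples, key=lambda ex: sq_dist(ex[0], point))
    return closest[1]

# Hypothetical toy dataset: two clusters with made-up labels.
examples = [((1.0, 2.0), "short"), ((1.2, 1.8), "short"),
            ((6.0, 7.5), "tall"), ((5.8, 8.0), "tall")]

print(predict((1.1, 2.1), examples))  # -> short
print(predict((6.2, 7.0), examples))  # -> tall
```

Nothing here was “programmed” to know what short or tall means; the prediction falls out of the stored examples, which is the sense in which data carries hidden knowledge.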
What about Security?
Security can also be approached in a similar vein. Humans created computers, and we write the very software we use. We could thus argue that our code is a reflection of us. Security breaches are caused by attacks on vulnerabilities in our code, or by malicious insider threats.
These are people problems. It therefore only makes sense to attempt to satiate “an insatiable curiosity about the human condition”: to humanise security, or to commoditise the “human aspect” of security. I attempt to formulate a more thorough and coherent answer to this big question in my Medium post, and begin to explore the subsequent question: ‘Is it even possible?’
This is only the first in a series of blog posts, and I hope to continue from where that post leaves off: next, with more focus on practical machine learning and on the efficient exploration and analysis of data.
So let me know — What do you think? :)
(As always, thoughts are not representative of current employer views, but simply my own.)