What I learned from hiring over 70 technologists and conducting more than 300 interviews

Image by www_slon_pics

I strongly believe hiring is one of the most important things a manager does. As a company evolves, the team is the essence of what the company becomes, not the other way around. Companies are created by the people they hire, not by the business plan. However, many companies leave hiring to inexperienced managers without much guidance. These tips come from my personal perspective and experience (I’ve made many of the mistakes I describe, or seen other people make them). I’ve hired over seventy people, done more than three hundred interviews, and evaluated thousands of CVs and cover letters. The vast majority of those were developers, but I’ve also been involved in hiring product owners, UX experts, data analysts/scientists, scrum masters, and other roles, as a hiring manager or as a consultant. With many of these people I worked for years, so I do have a view on what works and what doesn’t. …


Every month I publish a blog post to summarise what I think are the highlights in content on AI & data at large. It’s my personal notebook on interesting content. Previous editions can be found here: March, April, May, June

1. Article: Eye Movements During Everyday Behavior Predict Personality Traits

Personality tests are almost always based on the big five: agreeableness, conscientiousness, extroversion, openness, and neuroticism. Four out of five can now be predicted from eye movements: agreeableness, conscientiousness, extroversion, and neuroticism (plus perceptual curiosity). The study already achieved high accuracy, and the researchers suggest further optimization could push it even higher. The implications of this are enormous. …


1. Article: Great Power, Great Responsibility: The 2018 Big Data & AI Landscape, Matt Turck

Matt Turck and Demi Obayomi made a fantastic overview of the 2018 landscape.


2. Article: The Cost of Developers, Ben Thompson Stratechery

Probably the biggest industry news of June was that Microsoft bought GitHub for $7.5 billion. That’s roughly 4.5x as much as Google paid for YouTube and 7.5x what Facebook paid for Instagram. (WhatsApp was bigger, at $19 billion.) I think Thompson’s analysis of the deal makes a lot of sense. While very expensive, Microsoft had to do something.
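As a quick sanity check on those multiples, here is a minimal sketch. The deal sizes are assumptions based on the commonly reported figures (GitHub $7.5B in 2018, YouTube $1.65B in 2006, Instagram $1.0B in 2012, WhatsApp $19B in 2014):

```python
# Acquisition prices in USD billions (commonly reported figures; assumptions).
deals = {"GitHub": 7.5, "YouTube": 1.65, "Instagram": 1.0, "WhatsApp": 19.0}

github = deals["GitHub"]
# How many times the GitHub price exceeds each earlier deal.
multiples = {name: round(github / price, 1)
             for name, price in deals.items() if name != "GitHub"}
print(multiples)  # {'YouTube': 4.5, 'Instagram': 7.5, 'WhatsApp': 0.4}
```

So GitHub cost about 4.5 YouTubes or 7.5 Instagrams, but still well under half a WhatsApp.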

3. Podcast: Teaching computers to see with Dr. Gang Hua, Microsoft Research Podcast

Dr. Gang Hua explains in layman's terms how quickly computer vision has progressed over the last decades, and what the current challenges are.

4. Article (Dutch): Wat als de melkboer artificiële intelligentie inzet? (“What if the dairy farmer deploys artificial intelligence?”), VRT

From a practical perspective, computer vision has a lot of potential to quickly impact many industries. While many other segments of AI (for instance NLP) need lots of data to start with, with computer vision you can gather a lot of data on the fly with cheap cameras. This allows for more SaaS-like solutions (with accompanying hardware) and therefore much lower implementation costs. That’s a huge game changer for farmers (and for many other industries). …


Every month I will publish a blog post to summarise what I think are the 10 highlights in content on AI & data at large per month. I write this monthly overview with two objectives in mind. First to structure and process my own thoughts, and second to give back to the community. You can consume years worth of great content, just for free. There are some great initiatives that gather the best articles, and they have been very helpful to me. I hope this monthly blog is helpful to you as well.

1. Google Duplex content

Screenshot of the Duplex tech demo

Yes, this month was really about the Google Duplex tech demo. …



1. Book: Hans Rosling — Factfulness

Yes, Hans Rosling is that enthusiastic guy explaining (and debunking myths about) the relationship between wealth and population growth. Unfortunately he died in 2017. He was writing this book, “Factfulness”, just before he died, and his son Ola Rosling and daughter-in-law Anna Rosling finished the last part. It’s by far the best book I’ve read in 2018 so far, and one of the most impactful books I’ve read in a long time. Backed up with data, he explains how the world has become a better place and continues to improve at an incredible rate, despite popular belief to the contrary. His writing style is just as infectious as his video lectures. …


1. Article: “What worries me about AI” — Francois Chollet.

One of the dominant topics of March was AI safety and the fallout from Cambridge Analytica’s sneaky use of the Facebook Graph. The irony didn’t escape me that it’s a Google employee who’s complaining loudly about the industry’s (and Facebook’s in particular) irresponsible behavior around data safety. It’s a good read regardless, although it doesn’t provide a good answer to the obvious question: what is Google doing about it?

On the same topic, Yonatan Zunger (ex-Google, now at Humu) wrote this short piece, which is also worth reading. …


Often AGI, or ‘human-level AI’, is considered the holy grail of AI research. It’s even in the name: human-level AI. Why are we so obsessed with reaching a human level? What is AI’s relation to humans? And what’s our role in a symbiotic relationship between human and computer?

Robot Sophia and Einstein on stage at Web Summit

How do you build a machine that can improve itself, not on one task, but on many tasks? Brain researchers and AI researchers alike note that the only model we currently have of anything close to AGI (artificial general intelligence, a.k.a. human-level AI) is the human brain. The way our brain is built, with each neuron having thousands of synapses, is a great source of inspiration as long as we lack better alternatives. Our brain, for instance, filters very effectively, allowing us to take in a lot of input from our surroundings (such as sensory input) yet process it at the necessary speed with limited capacity. We are able to learn without enormous amounts of data available to us. Plus, the brain is very flexible, especially compared to current AI systems, which are very narrow. It’s not for nothing that we measure artificial intelligence against our own intelligence. The Turing Test is the most literal form. Goertzel et al. introduced two new tests in “The Architecture of Human-Like General Intelligence”: the coffee test and the robot college test. Perhaps the most interesting variation on the Turing Test comes from Nilsson: the employment test.
“To pass the employment test, AI programs must be able to perform the jobs ordinarily performed by humans. Progress toward human-level AI could then be measured by the fraction of these jobs that can be acceptably performed by machines.” …


As with most tech conferences these days, and probably for years to come, the buzz is all about artificial intelligence. Maybe it’s because of the level of the attendees, but I found it quite difficult to have a meaningful discussion on the topic. There are many debates going on, people don’t speak the same language, and they attach different meanings to the same terminology. On the positive side, there is a lot of progress on the subject, along all axes. …

About

Han Rusman

Product @ Quin
