As Augmented Reality (AR) technologies improve, we are starting to see use cases that stretch beyond marketing and simple demos: product visualization, remote assistance, enhanced learning, quality control and maintenance.
Apple’s Measure is one of my favorite AR apps. It’s a simple and reliable way of taking physical measurements with your smartphone, and it demonstrates how robust AR tracking has become.
It’s no secret that the field of machine learning is considered heavy on theory: Math, statistics and computer science, mainly. While practitioners tend to enjoy the depth, colleagues and clients “outside the bubble” find it difficult to participate in the conversation. That results in bad communication.
Projects go bad when the communication goes bad.
Bad communication distorts expectations, impacts planning and, in general, makes your team less efficient. As a practitioner, this is your problem too.

📊 Wrangler of Big Data™.
🧠 Wielder of ✨Magical Learning Algorithms✨.
🤔 Clever and curious.
😅 Difficult to talk to (if you’re not into ML). …
We often hear that Big Data is the key to building successful machine learning projects.
That presents a major problem: Many organizations simply won’t have the data you need.
How can we prototype and validate machine learning ideas without the most essential raw material? How can we efficiently obtain data and create value from it when resources are scarce?
At my workplace, we produce a lot of functional prototypes for our clients. Because of this, I often need to make Small Data go a long way. In this article, I’ll share 7 tips to improve your results when prototyping with small datasets.
When machine learning techniques are used in “mission critical” applications, the acceptable margin of error becomes significantly lower.
Imagine that your model is driving a car, assisting a doctor or even just interacting directly with a (perhaps easily annoyed) end user. In these cases, you’ll want to be confident in the predictions your model makes before acting on them.
Measuring prediction uncertainty grows more important by the day, as fuzzy systems become an increasing part of our fuzzy lives.
Here’s the good news: There are several techniques for measuring uncertainty in neural networks and some of them are very easy to implement! …
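As a taste of how simple this can be, here is a minimal sketch of one such technique, Monte Carlo dropout: keep dropout active at inference time, run several stochastic forward passes, and read the spread of the outputs as uncertainty. The model below is a made-up stand-in for illustration, not an architecture from the article:

```python
import torch
import torch.nn as nn

# Illustrative model; any network with dropout layers works the same way.
model = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # stays active at inference for MC dropout
    nn.Linear(64, 1),
)

def predict_with_uncertainty(model, x, n_samples=50):
    """Average several stochastic forward passes; the std is the uncertainty."""
    model.train()  # .train() keeps the dropout layers stochastic
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

x = torch.randn(8, 16)        # a batch of 8 dummy inputs
mean, std = predict_with_uncertainty(model, x)
print(mean.shape, std.shape)  # per-example prediction and uncertainty estimate
```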
This is the last part of my article series on “Human-Like” Machine Hearing: Modeling parts of human hearing to do audio signal processing with AI.
If you’ve missed out on the previous articles, here they are:
Background: The promise of AI in audio processing
Criticism: What’s wrong with CNNs and spectrograms for audio processing?
Part 1: Human-Like Machine Hearing With AI (1/3)
Part 2: Human-Like Machine Hearing With AI (2/3)
Understanding and processing information at an abstract level is not an easy task. Artificial neural networks have moved mountains in this area, especially in computer vision: Deep 2D CNNs have been shown to capture a hierarchy of visual features, increasing in complexity with each layer of the network. The convolutional neural network was inspired by the Neocognitron, which in turn was inspired by the human visual system. …
People have been wanting to talk to computers for a long time. Thanks to deep learning, speech recognition has become significantly more robust and significantly less frustrating — even for less popular languages.
At Kanda, we set out to examine speech recognition and natural language processing (NLP) techniques to make fluid conversational interfaces for augmented reality (AR) & virtual reality (VR). In Danish.
The Danish language is notoriously hard to learn. That goes for both humans and machines. Here are a couple of insights we gained along the way.
Speech recognition is a task that humans are really quite good at. Human-level speech recognition is often cited as a measured 4% word error rate, based on Richard Lippmann’s 1997 paper, “Speech recognition by machines and humans” [1]. …
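For reference, word error rate is simply the word-level edit distance (substitutions, deletions and insertions) between what was said and what was recognized, divided by the number of words in the reference. A small self-contained sketch, with made-up example sentences:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between the first i ref words and first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

# One substituted word out of six -> WER of ~0.17
print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))
```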
Hi, and welcome back! This article series details a framework for real-time audio signal processing with AI which I have worked on in cooperation with Aarhus University and intelligent loudspeaker manufacturer Dynaudio.
If you’ve missed out on the previous articles, click below to get up to speed:
Background: The promise of AI in audio processing
Criticism: What’s wrong with CNNs and spectrograms for audio processing?
Part 1: Human-Like Machine Hearing With AI (1/3)
In the previous part, we mapped the fundamentals of how humans experience sound as spectral impressions formed in the cochlea, which are then “coded” by a sequence of brainstem nuclei. This article will explore how we can integrate memory when producing spectral sound embeddings with an artificial neural network for sound understanding. …
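To make the idea concrete, here is a minimal sketch of how a recurrent network can carry memory across a sequence of spectral frames, producing one embedding per time step. The choice of a GRU and all layer sizes are assumptions for illustration, not the framework’s actual architecture:

```python
import torch
import torch.nn as nn

n_bands, embedding_size = 128, 64  # assumed spectral resolution and embedding width

class SpectralEmbedder(nn.Module):
    """Embeds a sequence of spectral frames while retaining memory of past frames."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(input_size=n_bands, hidden_size=embedding_size,
                          batch_first=True)

    def forward(self, frames):
        # frames: (batch, time, n_bands), e.g. cochlea-like spectral envelopes
        embeddings, _ = self.rnn(frames)
        return embeddings  # (batch, time, embedding_size)

spectrogram = torch.randn(1, 100, n_bands)    # 100 dummy spectral frames
print(SpectralEmbedder()(spectrogram).shape)  # torch.Size([1, 100, 64])
```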

Significant breakthroughs in AI technology have been achieved through modeling human systems. While artificial neural networks (NNs) are mathematical models which are only loosely coupled with the way actual human neurons function, their application in solving complex and ambiguous real-world problems has been profound. Additionally, modeling the architectural depth of the brain in NNs has opened up broad possibilities in learning more meaningful representations of data.
If you’ve missed out on the other articles, click below to get up to speed:
Background: The promise of AI in audio processing
Criticism: What’s wrong with CNNs and spectrograms for audio processing? …
In recent years, great results have been achieved in generating and processing images with neural networks. This can partly be attributed to the ability of deep CNNs to capture and transform high-level information in images. A notable example of this is the process of image style transfer using CNNs proposed by L. Gatys et al., which can render the semantic content of an image in a different style [1].
The process of neural style transfer is well explained by Y. Li et al.: “this method used Gram matrices of the neural activations from different layers of a CNN to represent the artistic style of an image. Then it used an iterative optimization method to generate a new image from white noise by matching the neural activations with the content image and the Gram matrices with the style image” [2]. …
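The Gram matrix at the heart of this method is easy to write down: for one layer’s activations, it is the matrix of correlations between the flattened feature maps. A hedged sketch follows; the tensor shapes are illustrative, and in practice the activations would come from a pretrained CNN such as VGG:

```python
import torch

def gram_matrix(features):
    """Channel-wise correlations of one layer's activations (c, h, w)."""
    c, h, w = features.shape
    flat = features.view(c, h * w)
    return flat @ flat.t() / (c * h * w)  # normalize by layer size

def style_loss(style_features, generated_features):
    """Mean squared difference between Gram matrices, the quantity driving
    the iterative optimization that matches the style image."""
    return torch.mean((gram_matrix(style_features) -
                       gram_matrix(generated_features)) ** 2)

activations = torch.randn(64, 32, 32)  # dummy activations for one layer
print(gram_matrix(activations).shape)  # torch.Size([64, 64])
```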

This week I went for a trip to Tallinn, Estonia. What was most exciting about the city was not its beautiful old town, intense winter or mild-mannered people, however — it was North Star AI, a machine intelligence conference for developers.
The speaking schedule was populated by champions of AI & CS such as Travis Oliphant (creator of NumPy), Sayan Pathak (principal ML scientist at Microsoft) and Ahti Heinla (co-developer of Skype and co-founder of Starship) among many others.
I came with a desire to explore new perspectives, meet smart people and learn from the best — you might have seen me there, eagerly taking notes. In this article, I will attempt to distill five lessons taken home from the experience: recurring patterns in the topics discussed by the speakers that sparked valuable insights for me. Be advised that these lessons do not necessarily reflect the views of the speakers, but rather my interpretation and aggregation of their points. …
