Let me start by saying I love my Apple products; I have a MacBook, an iPhone and an Apple Watch. When this article started out by saying they brought a tech journalist into the room instead of an industry expert, it immediately lost credibility. I’m sure he’s a very smart guy, but he doesn’t know much about ML compared to the people writing these publications and pushing out actual ML products. A few things that are features, not ML:
- The ability to locate your car on a map after you disconnect from Bluetooth is not ML.
- Showing the location of a photo and grouping photos by location is just matching POIs to geocoordinates. Again, not ML.
- Showing your most frequently used apps on a search screen is VERY basic pattern recognition combined with when you last opened them. I can’t count this as ML.
- Detecting dates in your email using very basic date patterns (MM/DD/YYYY, for example) and putting them on a calendar is just string matching. Not ML.
- Keeping a DB of all the numbers from your email/text messages, looking for nearby strings that look like names, and presenting a likely name to the user when an unknown number calls is not ML.
- Adding 5 VERY specific types of third-party apps to Siri and thinking up synonyms for the phrases that activate those third-party apps is not ML.
- The ability to detect similar faces is getting there! I give them credit for that, and it’s impressive that it can identify certain objects as well. My issue is that you have to assign the faces to people on every device, so the more Apple devices you own, the more individual learning has to occur.
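To show how little “learning” some of these features actually need, here’s a minimal sketch of the date-detection point above, assuming a single naive MM/DD/YYYY pattern (the names `DATE_PATTERN` and `find_dates` are mine, and a real detector would obviously cover many more formats):

```python
import re

# Naive date detector: pure string matching, no learning involved.
# Assumes US-style MM/DD/YYYY dates only.
DATE_PATTERN = re.compile(r"\b(0?[1-9]|1[0-2])/(0?[1-9]|[12]\d|3[01])/(\d{4})\b")

def find_dates(text):
    """Return (month, day, year) tuples for every date-looking string."""
    return [(int(m), int(d), int(y)) for m, d, y in DATE_PATTERN.findall(text)]

print(find_dates("Dinner on 03/14/2017, follow-up 3/15/2017."))
# → [(3, 14, 2017), (3, 15, 2017)]
```

A few lines of regex get you most of the way there, which is exactly why calling it ML feels like a stretch.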
Don’t get me wrong though! These are amazing features that add value for a ton of users, but to call them machine learning would be a disservice. They are human-created features that might involve small patterns, but machine learning is where a machine detects new patterns by itself. That’s just not what Apple is doing here.
Also, ML is built on huge data sets, because that’s how the learning happens. That’s something Apple just doesn’t have access to (or doesn’t want access to) for teaching its machines or giving each user a personal touch.
Apple does some ML stuff, but not compared to Microsoft, Google and Facebook. The advantage of ML in the cloud is that once one discovery is made (the model learns to identify a pencil, for example), every device knows how to identify a pencil. With Apple, when one device learns what a pencil is, Apple needs to code it into software, push out a firmware update and take it from there.
Just my two cents ¯\_(ツ)_/¯