Apple and its plans for Artificial Intelligence
Apple is a company that has excelled in hardware innovation, from computers to the iPod and iPhone, and it has also led in software: operating systems, video editing tools, and productivity apps such as Pages, Keynote, and Numbers. Far less is known, however, about Apple's plans for artificial intelligence: how it will use AI in its products, and how it can be a catalyst for developers who want to build models and use them in applications.
Artificial intelligence in software:
Apple implements machine learning models on its devices using software specialized for this kind of work, complemented by dedicated silicon; we will review Apple's Neural Engine in the hardware section.
Among the main software features found on Apple devices are:
- Face ID. Facial recognition builds a model specific to your face, which can then identify you to unlock the phone or authorize actions on it.
- Face ID in HomeKit. This is an interesting piece of Apple's development, since facial recognition models and patterns can be saved on your devices, so that when someone rings the doorbell, a camera connected to HomeKit can identify the person at your door.
- Handwriting on iPad with Apple Pencil. With machine learning models trained on large amounts of handwriting data, a stroke can be recognized and converted into a text character.
- Hand washing. Given that we are in 2020 and COVID precautions are an essential part of everyone's day, the Apple Watch can detect when someone is washing their hands; these models drive a 20-second countdown that "guarantees" effective hand washing.
These are just a few examples of Apple's applications of artificial intelligence. Based on the latest announcements at the Worldwide Developers Conference, however, it is almost safe to assume that every piece of Apple software has at least one machine learning or deep learning model in operation.
Siri and suggestions. Siri has been the most visible front end of Apple's artificial intelligence for several years: an assistant that can take instructions and, more recently, anticipate the applications, actions, or events a user may need at a given moment.
However, like many digital assistants, Siri has generated considerable disappointment about the quality of its results, which gives us a clear idea of how far we are from real artificial intelligence.
One of the big problems digital assistants face, apart from the technological challenges, is that they require the collection of new data in order to build new models or "retrain" existing ones. This means asking users for data so that it can be turned into the patterns the models on the devices learn from.
Early on, digital assistants, Alexa in this particular case, ran into trouble because information was not stored on each device; instead, every user's data was centralized on Amazon's servers so that the models could be refined at scale, not just per user.
For an artificial intelligence developer, storing or obtaining that amount of data makes sense, but each user's privacy comes into play, and the question becomes where to draw the line on what data may be collected.
Currently, as a security measure, data can either be sent to servers or kept on the device, at the user's choice, but this has undoubtedly complicated the accuracy of these models and their real usefulness for day-to-day tasks.
Artificial intelligence in hardware:
One of the most important and strongest steps in Apple's artificial intelligence plan has been the inclusion of hardware specifically designed to process artificial intelligence workloads.
Since the introduction of the A11 chip, an artificial intelligence engine known as the "Neural Engine" has been part of the design. This is simply hardware dedicated to neural network processing, capable of up to 600 billion operations per second.
This engine handles Apple's native artificial intelligence and machine learning tasks, with somewhat limited access for third-party developers because of compatibility with earlier devices.
With the introduction of the M1 chip, known as Apple Silicon, the same Neural Engine technology is integrated to perform artificial intelligence tasks more efficiently, this time inside a computer rather than a portable device such as a phone or tablet.
Although this hardware cannot be accessed directly, Apple has released a framework called Core ML for building, using, and deploying machine learning models. Core ML ultimately makes use of the Neural Engine, but from a software point of view. It has proven very useful for application developers, enabling processes and features in apps that would be very difficult to build without access to these libraries.
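As a rough illustration, here is a minimal sketch of running a Core ML image classifier through the Vision framework in Swift. The model class `FlowerClassifier` is hypothetical (Xcode generates a class like it for any `.mlmodel` bundled with an app); the Vision calls are the framework's standard API.

```swift
import CoreML
import Vision
import UIKit

// Hypothetical model: assumes an image-classifier model named
// "FlowerClassifier" has been added to the Xcode project, which
// auto-generates the FlowerClassifier class used below.
func classify(_ image: UIImage) {
    // Wrap the Core ML model for use with Vision.
    guard let model = try? VNCoreMLModel(for: FlowerClassifier().model) else {
        return
    }
    // The request runs the model and hands back ranked labels.
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }
    guard let cgImage = image.cgImage else { return }
    // Vision schedules execution on the best available hardware,
    // including the Neural Engine when the device has one.
    let handler = VNImageRequestHandler(cgImage: cgImage)
    try? handler.perform([request])
}
```

Note that the developer never addresses the Neural Engine directly: Core ML decides at runtime whether a given model runs on the CPU, GPU, or Neural Engine.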
What can we conclude?
Apple's leadership in artificial intelligence is highly debatable, since the scope of its efforts is not clear to developers and users, and, as is customary for the brand, the technology will not be open at the hardware level, so it cannot be fully exploited by third parties.
What we can intuit is that progress will be slow in terms of everyday functionality. However, constant use by users and feedback from the devices themselves can generate enough data for artificial intelligence to become the accessibility hub for every device, from a Mac to an Apple Watch.