A recent publication in the Optical Society’s journal Photonics Research demonstrates how certain computing tasks (referred to in the article as “artificial neural computing”) can be performed using a specially prepared sheet of glass (a “nanophotonic medium”). For this paper, a neural network was codified into the glass by iteratively introducing impurities, effectively “training” it to produce the desired outcomes for the task: in this case, recognizing images of digits shown to it.
As far as optics is concerned, this is an extraordinary accomplishment (at least it seems that way to an outsider like myself!). If the technology matures to the point where such glass can be fabricated quickly and tailored to the need at hand, its applications span a wide range of tasks, from facial recognition to fast, highly specific computation in low-power environments.
The technology’s limitations, however, make it likely to apply only to a narrow domain: without completely reconfiguring the glass (if such a thing is even possible), the neural network remains applicable to the one task it was created for and no other.
Some news articles have seized on this achievement and deemed it an artificial intelligence. I’m no expert in the field, to be sure, but this labeling worries me. Is every neural network an AI by default? Or is this really a highly complex, very specific tool? It can be calibrated, but only once. It can compute, but only in the sense that a mousetrap can compute when to spring, albeit on a completely different scale of complexity.
When you reduce such a system (any such system) to its fundamentals, you have a machine that takes an input via some sensing ability, performs computation on that input, and produces an output in the form of an action or record. In the technical sense, such a machine is sufficient to be called an intelligent agent, an AI. Whether the computation occurs by scattered light passing through glass, by electricity flowing through a circuit, or for that matter by gears turning, would seem to make no difference to the fact that you have input, computation, output.
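That substrate-independence can be sketched in code. The following is a toy illustration of my own (nothing from the paper): the “agent” is just input, computation, output, and the computation itself is an interchangeable function that could equally stand in for light scattering through glass, current through a circuit, or gears turning.

```python
from typing import Callable, Sequence

# A toy "agent" in the input -> computation -> output sense: sense an input,
# compute on it, emit an output. The computing substrate is a plug-in function.
def make_agent(compute: Callable[[Sequence[float]], int]) -> Callable[[Sequence[float]], str]:
    def agent(sensed_input: Sequence[float]) -> str:
        label = compute(sensed_input)        # computation on the sensed input
        return f"recognized digit: {label}"  # output as an action/record
    return agent

# Two hypothetical substrates performing the same computation: pick the index
# of the strongest signal (a crude stand-in for digit classification).
def electronic_compute(x: Sequence[float]) -> int:
    # "Circuit" version: argmax via the standard library.
    return max(range(len(x)), key=lambda i: x[i])

def optical_compute(x: Sequence[float]) -> int:
    # "Glass" version: imagine intensities measured at detector spots
    # behind the sheet; we just scan for the brightest one.
    best, best_i = float("-inf"), 0
    for i, v in enumerate(x):
        if v > best:
            best, best_i = v, i
    return best_i

signal = [0.1, 0.2, 0.9, 0.05]  # pretend sensor reading
print(make_agent(electronic_compute)(signal))  # recognized digit: 2
print(make_agent(optical_compute)(signal))     # recognized digit: 2
```

Both “substrates” produce the same output from the same input, which is the whole point: the input–computation–output shape, not the physical medium, is what the technical definition cares about.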
Granted, some systems may store information in gears, relays, or platters, while a sheet of glass cannot (true for the most part; see for example https://www.sciencedaily.com/releases/2018/07/180711093109.htm). The lack of storage does make some difference in how we think about a computational system, but might it still be an AI?
I don’t know, and I doubt I’m qualified to decide the answer to this question, but I do know a trend when I see one, and like all trends, it needs temperance.
It’s wonderful that technology has become so popular and exciting, but stuffing achievements under uncertain headings to fit a trend benefits no one (except perhaps publishers and academics, both of whom need the funds).
Returning to the news coverage mentioned earlier, I was particularly struck by how certain aspects of this achievement were phrased. Reading the line “It can also recognise, in real time, when a number it is presented with changes,” you may be justifiably impressed that a sheet of glass can detect numbers, and skip past the minor detail that it would be nearly impossible for this process to happen in anything other than real time.
How exactly would a sheet of glass delay processing the light passing through it? There are ways to “pause” light in glass, but none were applied in this research. Similarly, it should be fairly obvious that, given a system that classifies an image by scattering light, replacing the displayed image with another image the system is also capable of recognizing would produce the same effect. It would be outright impressive if it didn’t, as you would somehow have to establish a trigger in the glass that responds to the first image shown and then ignores future images, even ones the system could theoretically recognize.
But I rant. The research in this domain is impressive and useful, and perhaps it’s forgivable to somewhat misrepresent it in an attempt to give it a wider audience.
Yet I can’t help but worry when basic realities of physics are glossed over in the interest of sounding impressive, or when important questions are blithely brushed aside in deference to a fad.
The original publication itself never mentions the term “AI,” though most citing news sources do, including the Optical Society’s own site. So maybe I’m wrong; maybe I’m being pedantic. Maybe the researchers had their own reasons for not using the term. But AI is a critical topic at the forefront of public discourse, and its implications shouldn’t be understated.
To quote Confucius, “The beginning of wisdom is to call things by their proper name.”