This method seems to overlap with light-field imaging, which can produce seemingly 3D-rendered effects and post-hoc refocusing by recording not just light intensity on a 2D surface (as most camera sensors do) but also beam direction (usually via microlens arrays, as in a compound eye). 3D surface shapes can be computed from this directional beam data through a sort of reverse ray-casting, like film CG rendering run in reverse.
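The reverse ray-casting idea can be sketched in miniature: two sub-aperture views of a light field see the same scene from slightly different beam directions, and the per-pixel disparity between them triangulates depth. Everything below (baseline, focal length, pixel pitch, the 1-D toy scene) is an illustrative assumption, not taken from any real light-field camera:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random(64)        # toy 1-D intensity pattern at one depth plane

# Assumed (illustrative) geometry:
baseline = 0.03               # spacing between sub-aperture viewpoints (m)
focal = 0.1                   # focal length (m)
pixel_pitch = 1e-3            # sensor pixel size (m)
true_disp = 3                 # pixels of parallax between the two views

# Two sub-aperture views: the second sees the scene shifted by the disparity.
view_a = scene
view_b = np.roll(scene, true_disp)

def estimate_disparity(a, b, max_shift=8):
    """Pick the shift that best aligns view b back onto view a."""
    errs = [np.sum((np.roll(b, -d) - a) ** 2) for d in range(max_shift + 1)]
    return int(np.argmin(errs))

est_disp = estimate_disparity(view_a, view_b)
# Standard stereo triangulation: depth = baseline * focal / disparity.
est_depth = baseline * focal / (est_disp * pixel_pitch)
print(est_disp, est_depth)    # 3 pixels of disparity -> 1.0 m depth
```

Real light-field pipelines do this over many views and two dimensions at once, but the depth-from-direction principle is the same.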
All that is happening here is high-speed digital photography using light wavelengths that penetrate biological surface matter, where the LCD arrays are probably acting as high-speed moving apertures, pinhole-style focal points, and shutters all at the same time. If more than one LCD layer operates at a higher resolution than the camera's pixel array, at a similar refresh rate, the layers could dynamically select the detected light-beam direction per pixel on each video frame. The net effect is a high-speed, high-resolution 3D flat camera. Such a system would have many more uses than blood-oxygen 3D state detection. The first real “killer app” might be military: real-time 3D mapping through visual barriers, using longer infrared or radio wavelengths depending on barrier thickness. Once these 3D-mapping flat cameras and their ASICs get cheaper, they could go on the backs of phones and tablets, or onto auto body surfaces, replacing the multiple glass-lens cameras used for 3D mapping today.
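The per-frame LCD masking can be viewed as coded-aperture multiplexing: each frame, the mask weights the incoming directional components differently, and with at least as many independent mask frames as directions, the per-direction intensities can be solved for as a linear system. This is a toy sketch under that assumption, not the actual scheme of any particular device:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 4 unknown directional light components arrive at one
# sensor pixel; each video frame the LCD mask weights them differently.
n_dirs = 4
directions = rng.random(n_dirs)            # unknown per-direction intensities

# One random binary LCD mask pattern per frame; retry until the stack of
# masks is invertible (needs at least n_dirs independent frames).
masks = rng.integers(0, 2, size=(n_dirs, n_dirs)).astype(float)
while np.linalg.matrix_rank(masks) < n_dirs:
    masks = rng.integers(0, 2, size=(n_dirs, n_dirs)).astype(float)

readings = masks @ directions              # one scalar reading per frame

# Invert the mask system to recover the directional components.
recovered = np.linalg.solve(masks, readings)
print(np.allclose(recovered, directions))  # True
```

This is why the LCD refresh rate matters: the number of usable mask frames per video frame bounds how many directional components each pixel can disambiguate.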
In terms of sending signals back into the brain, I haven't heard of any methods that modulate temperature or light near neurons to induce a signal; the only ones I know of use electrical stimulation that parallels the electro-chemical interactions at the synapse. Doing it from outside the skull seems like it would require some kind of 3D-focused RF signaling or pinpoint magnetic induction, as in DIDO or Mega-MIMO communications, which use large phased arrays of antennas. That might also require the outer layer of this cap to be a broad-spectrum Faraday cage.
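Phased-array focusing of the DIDO / Mega-MIMO kind works by phase-shifting each antenna element so its signal arrives in phase at a chosen focal point; away from that point, the contributions partially cancel. A minimal sketch, where the element positions, carrier frequency, and target point are all illustrative assumptions:

```python
import numpy as np

c = 3e8            # speed of light (m/s)
freq = 2.4e9       # assumed carrier frequency (Hz)
wavelength = c / freq

# Linear array of 8 antenna elements along x; target focal point in front.
elements = np.array([[x, 0.0] for x in np.linspace(-0.1, 0.1, 8)])
target = np.array([0.02, 0.5])

# Phase each element so all propagation paths arrive in phase at the target.
dists = np.linalg.norm(elements - target, axis=1)
phases = -2 * np.pi * dists / wavelength   # compensating phase per element

def field_at(p):
    """Magnitude of the summed unit phasors after propagation to point p."""
    d = np.linalg.norm(elements - p, axis=1)
    return abs(np.sum(np.exp(1j * (phases + 2 * np.pi * d / wavelength))))

# Fully constructive (magnitude 8) at the target, weaker off to the side.
print(field_at(target), field_at(target + np.array([0.1, 0.0])))
```

Focusing tightly enough for "pinpoint" neural stimulation through tissue is a much harder problem (wavelength, absorption, and scattering all work against you), but the beam-steering geometry itself is just this phase alignment.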