2017 Apple keynote event at Apple headquarters

Imagine how you would feel if you were a person with a disability who could not use fingerprint biometric authentication to unlock a device. Apple solved this problem with the introduction of Face ID, a biometric authentication technology that unlocks the device by scanning your face. In the article "Qualcomm's new depth-sensing camera is surprisingly effective," author Cherlynn Low explains how active depth sensing in cameras works. She describes the most expensive module, which consists of an infrared illuminator, an IR camera, and an RGB camera of at least 16 megapixels. First, the illuminator projects light onto the subject in a dot pattern; then the IR camera locates and reads those dots. Depth sensing also lets the system measure the distance between the camera and the subject. But how is that distance measured so easily? The author answers this when she writes, "By calculating how the dots wrap over a subject and the distance between points, the system can tell how far away something is." Because the system uses IR cameras, the technology works in the dark too.
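The quoted explanation boils down to classic triangulation: a dot that lands farther from where the camera expects it (a larger disparity) belongs to a nearer surface. Below is a minimal sketch of that relationship; the focal length, baseline, and disparity values are illustrative numbers, not Qualcomm's actual parameters.

```python
# Sketch of the structured-light idea the article describes: an
# illuminator projects a known dot pattern, an IR camera observes where
# each dot lands, and the pixel shift (disparity) of a dot gives depth.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic triangulation: depth = focal length * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A dot shifted by 40 pixels, with a 500 px focal length and a 5 cm
# baseline between the illuminator and the IR camera:
d = depth_from_disparity(focal_px=500, baseline_m=0.05, disparity_px=40)
print(round(d, 3))  # 0.625 (metres)
```

The inverse relationship is what makes nearby subjects easy to measure precisely: a small change in depth produces a large change in disparity when the subject is close.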

This technology is also used in Samsung smartphones; however, the drawback Samsung suffered is that its face recognition did not use depth-sensing cameras. A Samsung phone first saves a photo of your face, then scans your face and compares it with the saved photo. In the article "The Smartphone's Future: It's All About the Camera," author Brian X. Chen writes,

“All a thief would need to do to fool the system was hold a photo of your face in front of the camera to access the smartphone.”

This can be devastating for a user's personal data, and the thief could be none other than a friend of the owner. Newer camera sensors address this: they mark the human face with infrared dots to analyze and scan it in 3D, and only after that work does the phone unlock. Some people argue that depth-sensing technology has a future only in smartphones, but it has opportunities in other fields too.
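The reason depth defeats the photo attack is simple: a printed photo is flat, while a real face has centimetres of relief between the nose and the ears. The toy check below illustrates that idea; the threshold and depth values are invented for illustration and are not how Face ID actually works.

```python
# Simplified liveness check: reject a "face" whose depth map is
# essentially flat, since a photo held up to the camera has almost no
# depth variation. All numbers are illustrative.

def looks_three_dimensional(depth_samples_m, min_relief_m=0.02):
    """Require at least ~2 cm of depth variation across the face region."""
    return max(depth_samples_m) - min(depth_samples_m) >= min_relief_m

real_face = [0.480, 0.455, 0.470, 0.500]  # nose closer than ears
photo     = [0.500, 0.501, 0.499, 0.500]  # essentially flat

print(looks_three_dimensional(real_face))  # True
print(looks_three_dimensional(photo))      # False
```

A real system would of course combine this with the actual 3D face match, but the flatness test alone already rules out the attack Chen describes.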

The use of depth-sensing cameras as part of face recognition is the future of biometric authentication. The best example is this: suppose you study at a university like George Washington University, where most doors are locked and can only be opened with your student ID card. If you forget your card at home or in the car, you have to go back for it and end up late for class. There could be worse situations too: what if your student ID card falls into the wrong hands? Whether it is your roommate, a teacher, or a stranger with bad intentions, that person could access the whole university with your card. We could install depth-sensing cameras at the doors so they scan your face and let you into the room. Apple made this technology so robust that even if someone wears a realistic mask of your face, like the ones used in Hollywood, the system still recognizes that it is not you. Depth-sensing cameras need not be limited to universities and colleges; they could also be installed in hospitals, schools, and members-only meeting rooms. Nest, a company that makes security cameras and door locks, built a video doorbell that first determines whether a visitor is human; if so, it scans who the person is, and if the face is saved and recognized by the camera as a family member, the doorbell lets the smart lock open the door. If the person outside is a stranger, the doorbell sends a notification to your smartphone about the unknown person at your door.

Augmented reality is another field where depth-sensing technology has a lot of potential. Microsoft built an augmented reality headset named "Microsoft HoloLens." It also works with depth-sensing cameras, which see and understand the environment. It is amazing how the camera scans the surroundings and figures out what object is where and how far it is from you. It can also tell which hand a person is holding out for gestures, and it helps track the person's head movements with help from other sensors on the device.
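Figuring out "what object is where and how far" starts with turning each depth pixel into a 3D point, using the standard pinhole-camera model. The sketch below shows that back-projection step; the camera intrinsics here are invented placeholders, not HoloLens specifications.

```python
# Back-project a depth pixel (u, v) into a 3D point in the camera's
# coordinate frame. fx, fy are focal lengths in pixels; cx, cy is the
# image centre. All intrinsics below are illustrative.

def backproject(u, v, depth_m, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Turn a pixel with a measured depth into an (x, y, z) point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# A pixel 100 columns right of centre, measured at 2 m depth, maps to a
# point 0.4 m to the right of the camera axis:
print(backproject(420, 240, 2.0))
```

Accumulating these points frame after frame is what lets a headset build a mesh of the room and pin virtual objects to real surfaces.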

Microsoft HoloLens

The device is still in development, but developers who build games and apps can get it for $3,000. Once developers have created games and apps for it, Microsoft will release it commercially for $5,000. The headset is expected to be released in 2019, and there could be many other uses for the depth-sensing camera in Microsoft's HoloLens. Augmented reality is going to be very popular with everybody. When Pokémon Go was released last year, it was striking for people to see animated characters appear in the real environment on their phones' screens. Similarly, with this headset, one can place virtual objects in real life. For example, someone wearing HoloLens can watch a movie without a television and can resize the movie's screen. A person can also place a calendar on the wall, weather information on a desk, and even a virtual dog on the carpet. All of these virtual things are visible only while wearing the headset.

Beyond augmented reality and face recognition, depth-sensing cameras have other possibilities too. In the article "Using a depth-sensing infrared camera system to access and manipulate medical imaging from within the sterile operating field," the authors Matt Strickland, Jamie Tremaine, Greg Brigley, and Calvin Law discuss how depth-sensing cameras are useful in medicine and the operating room. As the world gets smarter in the modern century, new technologies are arriving in every field. Matt Strickland and his team were the first to use the depth-sensing cameras of Microsoft's Kinect in the operating room to give surgeons a touchless, gestural interface to their computers. In the operating room, many x-rays, MRI scans, and other images related to the patient's health and medical record are needed, and they are displayed on computers via picture archiving and communication systems (PACS). It is difficult and time-consuming for surgeons to step back and forth to change slides or viewing angles on the computer, which is why the researchers tried using depth-sensing cameras along with infrared cameras to let surgeons change slides and viewing angles of x-rays with gestures while operating.

Devices like this cannot be built in hours; a lot of research has to be gathered, and many tests are performed before putting one in an actual operating room (OR). The study was conducted in three stages: first the researchers designed a system to predetermined specifications, then they created the gestural interface in a simulated setting, and finally they pilot-tested it in a set of live OR procedures. For the first stage, the specifications were written with help from surgeons and nursing staff, since they would be the people using the device. The specifications included that the "gesture recognition must be robust and reliable" and that "the system should be easy to integrate into existing ORs with minimal distraction, training or human resources." For the second stage, they mounted the device on a portable cart that can easily be moved around the OR wherever the surgeon wants. The device has an IR camera unit to capture the scene in front of it, an image processing unit that interprets the scene and extracts information about the person's position and motion, a feedback display, and a PACS workstation connected to the hospital's internal network and equipped with licensed software to display patients' reports and scans.
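To make the pipeline concrete, here is a hypothetical sketch of the kind of mapping the image processing unit might perform: turning a tracked hand displacement into a PACS navigation command. The gesture names, threshold, and command strings are all invented for illustration; the paper's actual gesture library is not reproduced here.

```python
# Hypothetical gesture-to-command mapping for a touchless PACS
# controller: a sufficiently large horizontal hand movement scrolls to
# the next or previous image slice. Threshold is an invented value.

def interpret_swipe(dx_m, threshold_m=0.15):
    """Map a horizontal hand displacement (in metres) to a command."""
    if dx_m > threshold_m:
        return "NEXT_SLICE"
    if dx_m < -threshold_m:
        return "PREV_SLICE"
    return "NO_OP"  # small movements are ignored as noise

print(interpret_swipe(0.30))   # NEXT_SLICE
print(interpret_swipe(-0.20))  # PREV_SLICE
print(interpret_swipe(0.05))   # NO_OP
```

The dead zone around zero matters in an OR setting: it is what keeps incidental hand motion during surgery from accidentally changing the image on screen.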

© 2013 Canadian Medical Association

The image on the right shows the final product, with the IR camera from the Microsoft Kinect attached to the top of the monitor.

After everything was implemented, preliminary tests were run in an unused OR with several surgeons, who performed the gestures. The researchers concluded that "the gesture library was generally intuitive and easy to learn, meeting the previously described specifications. Testers quickly became comfortable with the system, and all were able to master the simple set of gestures within 10 minutes. The participants' strong recall of the gestures during subsequent testing supported the intuitive nature of the system." After all the tests in the unused OR, they moved on to a pilot study in which the system was used in six operations. In all six, the surgeons used the gestures many times to scroll through images and to change the CT/MRI scans to get different views. Two of the six operations were laparoscopic, which require darkness in the room, and the system was observed to work extremely well in dark conditions too.

In another article, "Kinect weighs astronauts just by looking at them," author Jacob Aron explains how the Microsoft Kinect, which essentially consists of depth-sensing cameras along with IR cameras, can help humans measure their weight in space. An astronaut on a space mission can lose a lot of weight within days if their health is not monitored, so exercising two hours a day and checking weight are among the most important tasks for an astronaut. The author explains how William Thornton solved the problem of monitoring weight in space in 1965, since traditional scales do not work in orbit; however, we still use similar technology from those old days. The current device is bulky and takes up a lot of room, so replacing it with a set of depth-sensing cameras, i.e., the Microsoft Kinect, could free up space in the spacecraft, and since the Kinect is not bulky, astronauts could bring something else useful with them. Aron writes, "Velardo and his colleagues at the Italian Institute of Technology's Center for Human Space Robotics in Torino used Kinect's depth sensing ability to create a 3D model of an astronaut. Then the team ran their calculations using statistical models that link weight to body measurements based on a database of 28,000 people." The author also says the calculations are 97 per cent accurate, comparable to the error of around 3 kilograms we get from the current method. At the end of the article, Aron notes that the Kinect system has yet to be tested in space, since switching could be expensive: the current weighing systems would first have to be removed and the new one installed in a space shuttle.
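The statistical step Aron describes, linking body measurements from a 3D scan to weight, can be sketched as a simple regression. The coefficients and measurements below are made up for illustration; the real study fitted its model on a database of 28,000 people.

```python
# Illustrative sketch of estimating weight from body measurements taken
# off a 3D depth scan, in the spirit of the Kinect weighing study.
# These coefficients are invented, not the study's fitted model.

def predict_weight_kg(height_m, chest_circ_m, waist_circ_m):
    """Hypothetical linear model: weight as a function of body measurements."""
    return -100.0 + 60.0 * height_m + 45.0 * chest_circ_m + 30.0 * waist_circ_m

w = predict_weight_kg(height_m=1.75, chest_circ_m=1.00, waist_circ_m=0.85)
print(round(w, 1))  # 75.5
```

The appeal in orbit is that nothing here needs gravity: the camera measures geometry, and the statistics do the rest, which is exactly why a scan-based estimate can replace a bulky spring-oscillation device.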

In the end, technology is evolving very fast, and people who assume depth-sensing technology is only for smartphones are wrong. It can be useful in other fields too; there is a whole different side to using it. The Microsoft Kinect, which already has depth-sensing cameras, lets us interact through gestures in the air, and using those gestures for medical purposes was a smart move by the researchers. Depth-sensing cameras can even help weigh astronauts in outer space. Since the calculations were 97 per cent correct, giving an error of just 2.7 kilograms, about what we get from the current system in space shuttles, why not start putting Kinect sensors in the shuttle? This could be expensive, because the current weighing system has to come out before the new one goes in, but the new method has greater benefits. With it, astronauts could use the freed-up space for other resources, such as extra food or anything else. Also, without the current weighing system, the space shuttle would be a little lighter, saving a lot of fuel. All these possibilities come from depth-sensing cameras alone.

Work Cited

"Microsoft HoloLens: Everything You Need to Know About the $3,000 AR Headset." 20 May 2017. Web.

Chen, Brian X. "The Smartphone's Future: It's All About the Camera." The New York Times, 30 Aug. 2017. Web.

Strickland, Matt et al. “Using a Depth-Sensing Infrared Camera System to Access and Manipulate Medical Imaging from within the Sterile Operating Field.” Canadian Journal of Surgery 56.3 (2013): E1–E6. PMC. Web. 30 Sept. 2017.

Aron, Jacob. "Kinect Weighs Astronauts Just by Looking at Them." New Scientist, vol. 212, no. 2844/2845, 24 Dec. 2011, p. 16. EBSCOhost.

Low, Cherlynn. "Qualcomm's New Depth-Sensing Camera Is Surprisingly Effective." 15 Aug. 2017. Web.
