Sensors Expo 2017

Kalonji Bankole · Jul 8, 2017

The Sensors Expo and Conference is one of the largest emerging-technology events in the world and has been running for nearly three decades. This year it was held in downtown San Jose, CA.

“From design engineers to startup entrepreneurs to corporate management of the industry’s largest semiconductor companies, Sensors Expo & Conference attracts the engineering professionals that are involved in the design, development and deployment of sensor technologies.”

According to the organizers, nearly 6,000 people attended, ranging from nearly every engineering discipline (electrical, mechanical, chemical, aeronautical) to health care practitioners, scientists, and more.

The conference talks were split into the following tracks:

  • Energy Harvesting & Power Management
  • Emerging Technologies
  • Flexible & Wearable Tech
  • IoT & Wireless
  • Measurement & Detection
  • MEMS & Sensors
  • Novel Sensor Applications
  • Optical Sensing & Detection
  • Sensors & Embedded Systems Design
  • Sensor Data

The event started with a keynote by Ori Inbar outlining the potential benefits and use cases of Augmented Reality. He began with the recent surge in AR's popularity, referencing apps such as Pokemon Go and Snapchat filters. He then noted how widely adopted smartphones have become, and how much time the average person spends looking down at their phone. But because staying immersed in a phone screen isn't natural, he argued we will ultimately transition to smart glasses, which let us benefit from technology while remaining engaged with our environment.

Ori highlighted a quote by Elon Musk: for the majority of workers to avoid becoming obsolete, they'll have to leverage technology to their benefit and "achieve a symbiosis between human and machine intelligence".

Of course, we’re not yet at the point where humans can plug into a supercomputer to enhance their abilities like Professor X, or throw on an Iron Man suit, but Augmented Reality can definitely serve as a first step. It has the potential to enhance workers’ skills in various ways, especially when direct communication isn’t possible. For one, it can make training and onboarding much easier: new hires can tour a site and its equipment at their own pace, find their way around, and become familiar with complex machinery by having each module highlighted and described.

When coworkers, classmates, playmates wearing smart glasses are performing better than you, can you really afford not to wear them?

In understaffed or low-resource medical centers, trainee surgeons can be connected remotely to mentors for guidance. AR can be beneficial in the consumer space as well. Cimagine has released an application that lets users see furniture in their rooms before having it delivered, and Oculus is leveraging AR to enhance video chat by rendering participants in the background.

AR view for a field service technician

When it comes down to it, these AR solutions are all about improving collaboration and communication.

Ori concluded by saying that we ultimately can’t fight automation, and that some jobs will become obsolete. But AR will help empower workers and give them a fighting chance by allowing them to become more efficient and be trained more quickly.

The first AI talk of the day was given by Modar Alaoui, who spoke on the role of deep learning in image and facial recognition. He is the CEO of Eyeris, which makes the most widely used emotion recognition software.

He started out by introducing himself and giving a few mind-blowing stats: there are roughly 22 billion sensors/IoT devices in the world, and about a third of those devices are cameras. He also projected that within five years, 80% of all data will be images.

He then made sure the difference between facial analytics and facial recognition was clear: facial recognition picks out specific facial features for identification and even authentication, while facial analytics tracks and analyzes the subtle changes that form an expression.

In 2013, his company began experimenting with deep learning, starting with convolutional neural networks, the most widely used deep learning approach for this kind of analysis.

However, this proved challenging, mainly because there wasn’t enough publicly available data for training the networks. They found a few images on ImageNet, but none were classified by age, gender, emotion, or the other characteristics they were looking for. So Modar and his team set out to create their own dataset, collecting 3 million images and videos of thousands of people spanning various races, genders, and age groups. They were careful to collect images under different lighting conditions and scenarios, such as a subject driving, interacting with loved ones, or shopping at a retail store. The Eyeris team wanted as many variations as possible, so they used a wide variety of cameras and devices, including IR and 3D, and ended up creating one of the largest publicly available datasets in the world.

After the dataset was populated, they trained the algorithm to identify 7 primary emotions, which the scientific community agrees are universal across cultures, age groups, and other social demographics. Results were initially grouped as simply positive or negative, then classified as one of the seven universal emotions.

7 Primary Emotions
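
To make this concrete, here’s a rough sketch (in Keras) of what a convolutional network classifying face crops into seven emotions could look like. This is not Eyeris’s actual model; the 48x48 grayscale input, layer sizes, and emotion labels are illustrative assumptions.

```python
# Illustrative sketch (not Eyeris's model): a small CNN that classifies
# 48x48 grayscale face crops into 7 emotion categories.
# Input size, layer widths, and label names are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise", "contempt"]

def build_emotion_cnn(input_shape=(48, 48, 1), num_classes=len(EMOTIONS)):
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),  # one score per emotion
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model.fit(face_images, emotion_labels, ...) would then train on a labeled
# dataset like the one the Eyeris team collected.
```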

Once they could identify emotion from single photographs, the team moved on to analyzing motion of the head and upper body to detect common gestures such as shaking or nodding the head, folding the arms, and leaning forward or backward to indicate alertness. These additional gestures increased the confidence of the algorithm’s results.

Currently, the primary application for these solutions is in the automotive space, as the algorithms can detect a driver’s alertness, distraction, restlessness, fatigue, posture, and more. Upper body analysis can even determine whether both hands are on the steering wheel. This will enable cars to alert people, or even take action, before they nod off or get distracted. The analysis can be performed on passengers as well as the driver.

The secondary space they’re pursuing is “social robotics”, essentially robots that interact with humans and with each other. Examples implemented so far include Softbank’s robot “Pepper”, which is used to assist airport travelers, and robotic security guards.

Some of the bigger challenges Eyeris encountered were identifying non-frontal head poses and tuning the algorithms to work with different skin tones and lighting conditions.

Fybr is a company that specializes in creating IoT solutions for industrial and city spaces. They originated as a “smart” parking company, but gradually branched out to track utilities such as gas and water.

While deploying these sensor networks in the field, they ran into a variety of challenges getting the sensors to work reliably under constrained resources, whether limited power, RF interference, or otherwise. Matt emphasized that security cannot be an afterthought; it must be a priority from the very beginning.

He mentioned quite a few interesting challenges his team has encountered in the field. In NY, SF, DC, and other densely populated cities, RF interference, vibration, and noise can cause false readings and restrict connectivity. Once, while setting up a system in DC near the J. Edgar Hoover building, they couldn’t determine why their sensors weren’t able to communicate with each other. After bringing an engineer out to the field and poking around, they were approached by a few gentlemen in black suits and earpieces who kindly inquired what their business was, and revealed that there were several signal jammers in the area. In another case in DC, sensor readings closer to the ground would go haywire at seemingly random points during the day and stop at night. A bit of investigation revealed the cause: a subway train passing 30 feet directly below their deployment.

Fybr’s CTO Mrinal Wadhwa used the term “IoT networks” for large-scale collections of interconnected sensors that can be analyzed in real time, citing a smart city deployment as an example. Cities deal with all kinds of measurable issues, such as traffic, water and air quality, and efficient energy usage, and since most cities have rapidly growing populations, IoT is gaining traction as a way to greatly increase the efficiency of day-to-day operations. Once an IoT network is established and able to monitor infrastructure such as traffic lights, sewer lines, water mains, and traffic itself, one can create algorithms that respond to certain scenarios much faster than humanly possible, and even prevent some issues from occurring in the first place. It’s also useful to be able to identify resource types (open highway lanes, parking spaces), which of those resources are limited or in high demand, and how much of each is available at any given time. Instead of analyzing a single street light at a time, an algorithm should scale efficiently to analyze the entire network of street lights. In general, they’ve found that most deployments pay for themselves within just a few years.
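
As a rough illustration of what reacting to an entire network (rather than polling one device at a time) might look like, here’s a hypothetical sketch using MQTT. The broker address, topic layout, payload fields, and thresholds are all made up; this is not a description of Fybr’s actual platform.

```python
# Hypothetical sketch of a city-scale responder: subscribe to every street
# light's telemetry over MQTT (paho-mqtt 1.x-style callbacks) and react to the
# whole network at once. Broker, topic layout, and thresholds are assumptions.
import json
import paho.mqtt.client as mqtt

BROKER = "mqtt.example-city.gov"          # placeholder broker
TOPIC = "city/streetlights/+/telemetry"   # one topic per light, wildcard-subscribed

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    light_id = msg.topic.split("/")[2]
    # React faster than a human dispatcher could: flag likely outages at night.
    if reading.get("lamp_current_ma", 0) < 10 and reading.get("ambient_lux", 999) < 5:
        client.publish(f"city/workorders/{light_id}",
                       json.dumps({"issue": "lamp out"}))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()
```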

Marion Le Borgne then spoke with us about her work with Numenta’s open source Hierarchical Temporal Memory (HTM) algorithm, a relatively new type of neural network used primarily for anomaly detection. With the rapidly increasing number of sensors being deployed in the field, there’s demand for a common model that can analyze large datasets even when the data is raw and unlabeled.

Technically, HTM differs from mainstream deep learning and machine learning approaches; it’s based on biological properties and neuroscience. It mimics the behavior of the neocortex, processes raw, unlabeled data, and organizes patterns as a hierarchy. HTM also does not require a separate training phase. Detected anomalies can be classified as either spatial or temporal: spatial anomalies are a single point or a small number of points, while temporal anomalies are a long-running series of anomalies.

One of the implementations Marion tested modeled taxi demand in NY by recording the total passenger count every half hour. The team detected several anomalies, and found that the major temporal ones corresponded to the NY marathon, Thanksgiving, New Year’s Eve, and a major blizzard. Other potential use cases for HTM mentioned here were tracking stocks, vehicle paths, server monitoring, and human behavior.
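
HTM itself takes a fair amount of configuration, so as a simplified, non-HTM illustration of the same streaming workflow, here’s a rolling z-score detector over half-hourly passenger counts. It only catches the “spatial” style of anomaly described above, and the window and threshold values are arbitrary.

```python
# Simplified, non-HTM sketch of streaming anomaly detection on the same kind
# of data Marion described: half-hourly taxi passenger counts.
# A real HTM model learns temporal patterns; this rolling z-score only
# flags point ("spatial") outliers. Window and threshold are arbitrary.
from collections import deque
import math

class RollingAnomalyDetector:
    def __init__(self, window=336, threshold=4.0):   # 336 half-hours = 1 week
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def score(self, value):
        """Return how many standard deviations 'value' is from the recent mean."""
        if len(self.window) > 10:
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var) or 1.0
            z = abs(value - mean) / std
        else:
            z = 0.0   # not enough history yet
        self.window.append(value)
        return z

detector = RollingAnomalyDetector()
# for timestamp, passenger_count in half_hourly_counts:
#     if detector.score(passenger_count) > detector.threshold:
#         print(f"anomaly at {timestamp}: {passenger_count} passengers")
```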

Charles Greene represented Powercast, a company striving to make RF power harvesting practical. This can potentially eliminate the need for batteries, or at least for battery maintenance, since devices can “trickle” charge throughout the day. The concept is perfect for pacemakers, sensors, smart watches, and other low-power devices. It works by using receivers that convert RF signals to DC, which can then power the device directly or be stored in a capacitor or battery.

Powercast Transmitter and Receiver modules

There are currently three primary categories of RF Power Harvesting:

  • Intentional — The most reliable and efficient method. Customers install a Powercast transmitter dedicated to powering devices in the area; it consistently broadcasts a 915 MHz signal up to 50 ft away and can deliver 1–3 W.
  • Anticipated — This method harvests RF energy from devices that constantly emit energy but are dedicated to other purposes, such as WiFi routers, radio towers, and cell phones. These generally radiate much less energy (a WiFi router typically delivers 50–100 mW), but this approach requires less hardware since the emitters are already in place.
  • Ambient — Emitters whose distance and usage times are somewhat unpredictable, such as walkie talkies, microwaves, and video game controllers.

Power harvesting can be great for wearables: they can be trickle charged overnight, and the battery and charging circuitry can be completely sealed, never needing to be accessed or removed. A single RF transmitter should also be able to trickle charge hundreds of devices at a time, as long as they’re in close proximity. However, it’s worth noting that devices can “steal” power from one another; the devices closest to the transmitter absorb the majority of the emitted signal and charge faster than the devices behind them.
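
To get a feel for why distance matters so much, here’s a back-of-the-envelope estimate of available power using the free-space Friis equation, with the 3 W / 915 MHz figures from above. The antenna gains and RF-to-DC conversion efficiency are assumed values, not Powercast specifications.

```python
# Back-of-the-envelope estimate of received RF power vs. distance using the
# free-space Friis equation. Transmit power matches the 3 W / 915 MHz figure
# above; antenna gains and conversion efficiency are assumptions.
import math

C = 3e8                 # speed of light, m/s
FREQ = 915e6            # transmitter frequency, Hz
P_TX = 3.0              # transmit power, watts
G_TX = G_RX = 1.0       # unity antenna gains (assumption)
EFFICIENCY = 0.5        # RF-to-DC conversion efficiency (assumption)

def harvested_power_mw(distance_m):
    wavelength = C / FREQ
    # Friis: P_rx = P_tx * G_tx * G_rx * (lambda / (4 * pi * d))^2
    p_rx = P_TX * G_TX * G_RX * (wavelength / (4 * math.pi * distance_m)) ** 2
    return p_rx * EFFICIENCY * 1000  # milliwatts

for d in (1, 5, 15):  # meters (~3, 16, 50 ft)
    print(f"{d:>2} m: {harvested_power_mw(d):.3f} mW available for trickle charging")
```

The falloff with the square of the distance is exactly why devices close to the transmitter charge so much faster than the ones behind them.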

Motus is a company that embeds inertial sensors in wearables such as sleeves and compression shirts for athletes. They began by creating a wearable sleeve (mTHROW) for NCAA baseball pitchers that analyzes throwing form so corrections can be made as needed. The primary purpose is to prevent injuries, as 11% of pitchers tear a shoulder ligament at some point in their career.

They’ve also created a machine learning algorithm that tracks each pitcher’s performance on the field and provides feedback and schedules recommending an optimal number of pitches per day, as well as rest time. A similar algorithm is being created to analyze the movement of NFL quarterbacks and cricket bowlers.

The initial devices contained a single inertial sensor, but upcoming releases will have arrays of sensors to allow for full-body assessments. Since throwing power travels from the ground up, it would be useful to also have sensors on the knees and hips. Motus is also in the process of releasing a compression shirt with sensors on both biceps and shoulders.

Another goal is to measure symmetry, stability, and vibrations to detect microfractures in each athlete’s joints.

Motus is also aiming to improve concussion testing. Whenever a concussion is suspected, an athlete takes a baseline test checking memory, motor function, and eye movement. Today these are assessed by the human eye, which is not as precise as we’d like, so Motus is working on using these wearables to test motor function by detecting when athletes sway slightly.

This can also be applied in the workplace, where many workers suffer from back, rotator cuff, and knee injuries. Preventing work-related injuries early benefits both workers and employers. The product is currently implemented as a safety vest with 4 sensors (pelvis, neck, shoulders). The vest can be worn all day and tracks how much time each worker spends standing, sitting, squatting, and lifting. It can also detect spine angle and orientation and estimate the weight being lifted.

The sensors embedded in the wearable communicate via BLE and collect data throughout the day. A gyroscope and microcontroller sit in a small pocket in the sleeve and can be removed when the sleeve needs to be washed. The software watches for a sudden spike in movement to determine when a throw is occurring, then samples the sensor values at 1,000 Hz until the movement subsides. One major issue is that the sensor clocks tend to drift, and syncing them to correct the offsets uses more battery. The results are validated using 3D motion capture systems and third-party research labs.
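
Here’s a rough sketch of that spike-triggered capture idea: idle until the accelerometer magnitude jumps, then record at the high sample rate until the motion subsides. The thresholds, the sample rate handling, and the read_accel() source are illustrative assumptions, not Motus’s firmware.

```python
# Illustrative sketch (not Motus firmware) of spike-triggered capture:
# watch the accelerometer magnitude, and once it jumps past a threshold,
# record samples until the motion dies back down. Thresholds, rates, and
# the read_accel() sample source are assumptions.
import math

START_G = 2.5        # spike that marks the start of a throw (assumed)
STOP_G = 1.2         # magnitude below which the throw is considered over (assumed)
SAMPLE_HZ = 1000     # high-rate capture during the throw

def magnitude(sample):
    return math.sqrt(sample["x"] ** 2 + sample["y"] ** 2 + sample["z"] ** 2)

def capture_throw(read_accel):
    """read_accel() is a placeholder for the IMU/BLE sample source."""
    # Idle until a sudden spike suggests a throw is starting.
    while magnitude(read_accel()) < START_G:
        pass
    samples = []
    # Sample at the high rate until the movement subsides.
    while True:
        s = read_accel()
        samples.append(s)
        if magnitude(s) < STOP_G and len(samples) > SAMPLE_HZ // 10:
            break
    return samples
```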


Kalonji Bankole is a developer advocate on IBM's emerging technology team. Day to day, he works with open technologies such as Ansible, MQTT, and OpenWhisk.