ZebraSense: Giving Smart Textiles a New Sense of Direction

Tony Wu · Published in ACM UIST · Nov 12, 2020

This blog post is written by Tony Wu & Nicholas Gillian. It is a lightweight summary of our team’s publication in the UIST 2020 Conference Proceedings with some additional visual flair that we weren’t able to include in a static PDF.

Since the conception of Project Jacquard within Google ATAP in 2014, the team, together with our brand partners around the world, has been focused on bringing interactive textiles from research labs to the general public.

Products with integrated Jacquard interactive textiles out in the world today!

Smart textiles are of particular interest to us on the path toward our ultimate goal of ambient computing because they are ubiquitous, culturally significant, and almost constantly within our hands’ reach: they are the perfect surface for convenient touch interactivity.

While multiple products with Jacquard technology are out in the world, one way we have wanted to expand our platform is by realizing smart textiles’ natural potential to be interacted with from both sides.

In contrast to the digital touch devices we are accustomed to today, which are flat, rigid, and restricted by their form factor to single-sided touch sensing, textiles are conformal, flexible, and often have surface directionality in their use case (e.g. a pocket). These properties make them a perfect candidate for leveraging both sides of the material to create novel touch interfaces.

Textiles often have surface directionality, e.g. the inside vs. outside of a pocket.

With this opportunity in mind, we developed a novel yet simple capacitive sensing structure for textiles that gives interactive fabrics the ability to distinguish touches made on either of their two surfaces. We named it ZebraSense.

The construction and sensing principles of ZebraSense are simple and intuitive. Here we show an example with a traditional 1D slider structure, which we augment to have ZebraSense capabilities.

Take the sensing elements (e.g. conductive yarns) and place them alternately on two separate planes that are slightly offset along the z-axis by ∆z. With this structure, the slider maintains its original ability to track hand motion along the y-axis using all of the sensing elements, but it can now also differentiate the z-direction from which a touch was made, thanks to the ∆z separation.

Because one layer of sensing elements is slightly farther from the hand than the other (depending on the direction of touch), and capacitance is inversely related to distance, we can leverage the small but measurable differences in their capacitive responses to determine from which side along the z-axis the touch was made.
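For intuition, in the parallel-plate approximation the capacitance between the finger and a sensing line scales inversely with their separation:

$$C \approx \frac{\varepsilon A}{d}$$

so a touch from the +z side sits roughly ∆z closer to the top layer of lines than to the bottom layer, and the top layer therefore sees a slightly larger capacitance change.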

A very simple formula for determining the z-direction from the data stream is to sum the measured signal differences between each even-odd sensing pair (even and odd refer to the numerical indices labeled in the graphic: the top layer is even, the bottom layer is odd). The result of this summation, which we call kz, is positive if the touch comes from the +z axis and negative if it comes from the -z axis.

The basic formula for determining from which z-direction the touch was made.
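As a minimal sketch, the computation might look like this in Python (the array layout and function names here are illustrative assumptions, not the actual Jacquard implementation):

```python
from typing import Sequence

def compute_kz(signals: Sequence[float]) -> float:
    """Sum of even-odd signal differences across all sensing pairs.

    `signals` holds one baseline-subtracted capacitance value per line,
    indexed as in the graphic: even indices = top layer, odd = bottom.
    """
    return sum(signals[i] - signals[i + 1] for i in range(0, len(signals) - 1, 2))

def touch_side(signals: Sequence[float]) -> str:
    """A positive sum (kz) means the touch came from the +z (top) side."""
    return "+z" if compute_kz(signals) > 0 else "-z"

# Example: a touch from the top makes the even (top) lines respond more strongly.
sample = [1.2, 0.9, 1.5, 1.1, 1.3, 1.0, 0.8, 0.6, 0.4, 0.3]
print(touch_side(sample))  # "+z"
```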

This basic formula works for most touch interactions, but it has limitations in more complex cases. Let’s look at an example of a challenging on-body direction-detection scenario, where the sensor sees strong signals from both surfaces simultaneously, and at how we use more robust signal processing and machine learning techniques to address it.

It is interesting to note that cases where ZebraSense comes into direct and consistent contact with the body, such as yoga pants or tight-fitting sports attire, are actually easier sensing cases than instances where ZebraSense is embedded into a relaxed or loose-fitting garment, such as the sleeve of a casual jacket. The reason is that the capacitive sensor automatically calibrates itself to consistent contact with the body and only reacts to changes in the sensor signal, such as the hand approaching the sensor during an interaction. This is exactly the scenario we see with tight-fitting attire. In the case of loose-fitting clothing, however, a gesture performed on the exterior surface of the garment may push the sleeve toward, and into contact with, the arm, resulting in simultaneous signals from the interior and exterior sides of the sensor. This combination of simultaneous dynamic changes on the front and back of the sensor creates too much ambiguity to classify which side of the sensor the interaction was made on simply by using the sign of kz.

It is trivial to determine the z-direction on tight-fitting apparel with the basic kz formula.
However, signals on loose-fitting apparel can often be ambiguous and require more advanced techniques.

To address this challenge, we can use a data-driven approach to differentiate between touch interactions on the front of the sensor and body contact on the back of the sensor. This approach takes advantage of the following three observations:

  1. While we can’t simply use the sign of kz to indicate front or back interactions, there may be a more appropriate threshold that robustly separates the two surfaces, and this threshold can be learned from data.
  2. Values of kz close to zero are the most ambiguous, because the front and back lines cancel each other out. In contrast, kz values with a large magnitude provide good confidence about directionality. We should therefore weight large positive or negative kz values more heavily than smaller values in the ambiguous region close to zero.
  3. An interaction such as a gesture is typically performed over several hundred milliseconds and therefore spans a number of consecutive samples from the sensor; we can increase the confidence of a front-vs-back prediction by integrating predictions over numerous samples.

We combined the three observations described above into a simple yet powerful heuristic algorithm that predicts the confidence of an interaction occurring on the front or back of the sensor. The heuristic is defined by the following equation:
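A form consistent with the description below (the paper gives the exact definition) is:

$$P(\mathrm{front} \mid k_1, \ldots, k_T) = \frac{\sum_{t=1}^{T} f(k_t)}{\sum_{t=1}^{T} f(k_t) + \sum_{t=1}^{T} f(-k_t)}$$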

The equation above estimates the probability of an interaction occurring on the front of the garment given a time series of T corresponding kz values, such as those collected over the course of a gesture. The numerator sums over all values of kz above a custom threshold value (𝜏), where f(kt) is a nonlinear function that rewards values of kz with a large magnitude above the threshold. The denominator normalizes the estimate across both front and back interactions.

The helper function f(kt) is as follows:
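A form consistent with the surrounding description (again, see the paper for the exact definition) is:

$$f(k_t) = \begin{cases} e^{k_t}, & k_t \geq \tau \\ 0, & \text{otherwise} \end{cases}$$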

This function returns zero for any value of kz below the threshold 𝜏, and gives exponentially larger rewards for values of kz greater than or equal to the threshold. The threshold can be learned directly from data for specific use cases, to optimize either the number of gestures detected or the precision of interactions detected on the front of the garment.
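Putting the three observations together, here is a minimal Python sketch of the heuristic. The exponential form of f and the example threshold value are assumptions based on the description above, not the paper’s exact implementation:

```python
import math
from typing import Sequence

def f(k: float, tau: float) -> float:
    """Nonlinear reward: zero below the threshold, exponentially
    increasing for values at or above it (assumed form)."""
    return math.exp(k) if k >= tau else 0.0

def p_front(kz_series: Sequence[float], tau: float) -> float:
    """Estimate the probability that an interaction occurred on the
    front, given a time series of kz values spanning the gesture."""
    front = sum(f(k, tau) for k in kz_series)   # evidence for front (+z)
    back = sum(f(-k, tau) for k in kz_series)   # evidence for back (-z)
    if front + back == 0.0:                     # no confident samples yet
        return 0.5
    return front / (front + back)

# Example: a noisy gesture where the confident samples point to the front.
kz = [0.05, 0.4, 0.9, 1.2, 0.7, -0.3, 0.1, 1.0]
print(p_front(kz, tau=0.5))  # > 0.5 -> likely a front-side interaction
```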

The figure below illustrates how the equations above can be applied to determine on which side of a garment a swipe gesture is performed using ZebraSense.

An example of the direction classification heuristic for a swipe-right gesture (top) performed on the top surface of the sensor. The first and second plots show the capacitive signal and corresponding signal sum as a function of time. The third plot shows the sum of even and odd lines, illustrating the ambiguity between which grouping of lines is dominant. The fourth plot shows kz and a custom threshold that was learned from data for this specific use case. The fifth plot shows the cumulative confidence for the top (orange) and bottom (blue) of the sensor, clearly indicating the confidence in the gesture being performed on the top surface by the end of the gesture.

The figure below shows the effectiveness of the direction classification algorithm at distinguishing front vs. back interactions on tight, relaxed, and loose-fitting garments. The algorithm successfully predicts the interaction side with 98%, 94%, and 82% accuracy, respectively. See the paper for details on the study.

Evaluation results for gesture vs background motion discrimination across five test users for three simulated garment fit test conditions (tight fit, relaxed fit, loose fit).

In addition to differentiating which side of a garment an interaction was performed on, ZebraSense can also be used to sense multi-touch gestures like more traditional capacitive sensors. We found that ZebraSense can achieve gesture-classification accuracy similar to that of conventional multi-line capacitive yarns (i.e. an array with a Δz of zero), such as those used in all the products shown at the top of this post, all of which feature integrated Jacquard technology. To test this, we measured the performance of ZebraSense using a machine learning classifier previously trained on regular non-ZebraSense data, collected from a sensor with the same topology and materials except for Δz = 0. Our hypothesis was that if the gesture model generalizes to the ZebraSense sensor data, this would indicate that gesture recognition is not degraded by the Δz offsets of corresponding odd-even pairs of lines.

The reference model used for testing was a convolutional neural network trained on 20K gesture instances from 100 users performing gestures on a conventional 10-line capacitive sensor embedded in the cuff of a denim jacket (gestures: swipe left, swipe right, double tap, and full hand cover). No ZebraSense data was used to train or tune the model. The model was tested on 240 gestures collected from 5 participants wearing a relaxed-fit test sleeve with an integrated ZebraSense sensor, closely resembling the original jacket-worn use case. Despite not being trained on ZebraSense data, the neural network achieved an average accuracy of 88.9% across all four gestures, with double tap, swipe out, and full hand cover achieving recalls of 0.98, 0.983, and 1.0 respectively. Full details of the study can be found in our paper.

While this study was limited to a small number of test participants in a controlled environment, the results are exciting: they indicate that ZebraSense can detect both direction and multi-touch gestures, even in complex interaction cases, using machine learning techniques that have already been launched on embedded hardware and conductive yarn technology that has already been integrated into multiple consumer products.

We’d like to wrap up by highlighting that ZebraSense is not only simple to understand and prototype, but also ready to be manufactured at scale with existing industrial weaving processes, which makes it attractive for productization. Enjoy this bonus clip of ZebraSense being woven in a factory:

ZebraSense being woven on an industrial loom.

Thanks for taking the time to read our blog post! If you’d like to find out more about ZebraSense, we encourage you to check out our talk and paper for more in-depth details. Also, since ZebraSense is so easy to prototype (you can make it with some wires and a piece of cardboard in between!) and its applications can extend beyond the scope of smart textiles, we encourage you to make your own double-sided touch interaction demos with ZebraSense and share them with us and the maker community!

Paper Citation

Tony Wu, Shiho Fukuhara, Nicholas Gillian, Kishore Sundara-Rajan, and Ivan Poupyrev. 2020. ZebraSense: A Double-sided Textile Touch Sensor for Smart Clothing. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (UIST ’20). Association for Computing Machinery, New York, NY, USA, 662–674. DOI: https://doi.org/10.1145/3379337.3415886

Acknowledgements

We’d like to extend our gratitude to everyone involved, both internal team members and external partners, in making this research and publication possible. And a special thanks goes to James Provost who created these delightful technical illustrations and animations for us which really made this post shine!
