Zi-Fi™: Vision for a unifying networking standard for Gen Z toys


SuperSuit™, the new connected wearable gaming platform, has been a playground for budding technologies, and in this blog post we pull back the curtain on Zi-Fi™ and ENGaGE™.

Zi-Fi™ is an ad-hoc, standalone wireless network that sits between BLE and ZigBee in range and functionality, is secure and robust compared to traditional R/C protocols, and is extremely low powered. It is aimed at becoming a standard for the new age of intelligent connected toys. The need for a new network arose after we found numerous gaps while reviewing the radio protocols available to the growing domain of connected toys. In the article below we outline these gaps and take the first few steps in sharing our emerging vision for Zi-Fi™, in the hope of sparking interest in the larger ecosystem. The ideal milestone for this journey would be opening up a mature stack to a larger audience for adoption and integration.

ENGaGE™ is the first natural gesture processing technology for embedded gaming. It recognizes macro gestures performed by players moving through 3D space to trigger actions in a game environment and control bots. ENGaGE™’s Machine Learning subsystem learns complex new gestures on the first trial.


Wearables have been a buzzword in marketing circles for a few years now and a stepping stone for the next leap of growth in connected embedded systems. Smartwatches have become synonymous with wearable computing, and AR/VR is definitively the next major wave. Between these two we are inundated with devices of all forms and factors, with use cases ranging from frivolous to luxurious, mostly unified by the common standard of BLE. A connected personal wearable device is built around a SoC that implements Bluetooth Smart, integrates some sensors and haptics, and leverages your smartphone’s connection for access to the cloud. However, we have seen products and use cases that have played fast and loose with the words ‘wearable’ and ‘connected’; we at SuperSuit are working on products and ecosystems of a similar kind but with a fundamentally different design approach. We will use our learning and experience to shed some light on the industry as a whole.

The reason for the huge interest and investment in these technologies is their potential to free up our hands for tangible interaction, allow true mobility in day-to-day computing, and enable better social interaction while we engage with technology. The first wave of well-developed AR/VR devices (Meta, Magic Leap, HoloLens, Oculus) has shown great potential to herald the next era of computing, the same way smartphones brought computing to the masses. Meta has been very vocal about what they call the neural path of least resistance to computing, and trusted reviewers have had only good things to say about the technology and its potential.

We at SuperSuit are working on technologies we believe will change the way we play games. We are a games company at heart, and over the years have developed a niche in the children’s space. We are trying to fundamentally alter the way technology intersects with children’s lives, in an attempt to offer the best of gaming and free play. We are launching our flagship product, SuperSuit, shortly; SuperSuit is a wearable gaming platform designed to enable real-world multiplayer gaming without needing any external infrastructure or device. We debuted the concept at CES 2016 and have since polished the ID and the underlying technology to vastly improve the experience. Among the many innovations built into SuperSuit’s enclosure is the proprietary communication stack Zi-Fi™, built to offer frictionless network formation and gameplay among any number of players.

Over the last couple of years we have seen a huge wave of innovation in the toys and games industry, with mainstream technologies spilling over from industrial and scientific domains into this niche, sometimes as pure fun products, at other times under the guise of educational toys :). Sifteo, Anki, connected drones, Wonder Workshop and many others have led the way, and Machine Learning, Artificial Intelligence and autonomous systems are no longer alien words in this industry. The time is ripe to start addressing this industry with its own unique solutions, rather than bin shopping in COTS markets.

Let me take you back to our drawing board, where we analyzed and benchmarked numerous COTS technologies and standards against our requirements. We had requirements that were each met by one standard or another, but never all within a single one.

These were our basic requirements:

1. Zero-friction multipoint peer-to-peer topology, with or without mesh.

2. No single point of failure in the network.

3. Range of ~50 m in real-world conditions in a cluttered environment.

4. Low power

5. Homogeneous game information across the network

6. Any node is able to detect loss of the network, or its own falling out of range

7. Ability to withstand nodes constantly in motion (a rough sketch capturing these targets follows this list)
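To make these targets concrete, here is a minimal sketch of how they might be captured as compile-time constants in a firmware configuration header. The header name and macro names are hypothetical, and any value not stated in the list above is an assumption, not the actual Zi-Fi configuration.

```c
/* zifi_targets.h -- illustrative design targets only.
 * Macro names are hypothetical; values not stated in the
 * requirements list above are assumptions. */
#ifndef ZIFI_TARGETS_H
#define ZIFI_TARGETS_H

#define ZIFI_TARGET_RANGE_M        50  /* ~50 m in cluttered, real-world conditions   */
#define ZIFI_TOPOLOGY_P2P           1  /* multipoint peer-to-peer, no fixed roles     */
#define ZIFI_MESH_OPTIONAL          1  /* mesh relaying allowed but not required      */
#define ZIFI_NO_SINGLE_POINT_FAIL   1  /* no coordinator-style node to depend on      */
#define ZIFI_STATE_HOMOGENEOUS      1  /* every node holds the same game information  */
#define ZIFI_NODES_MOBILE           1  /* peers are expected to be in constant motion */
/* Power and cost budgets are product-specific and deliberately left out here. */

#endif /* ZIFI_TARGETS_H */
```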

And all this at a price point that offers the potential of scaling across the industry. I have not yet touched upon how fragmented the toys and games ecosystem is. Every manufacturer of repute has had its own proprietary solution when it comes to communication (R/C), with no standardization, no insight into benchmarks, and hence no way to even consider the potential interoperability of products. Bluetooth is a good example of what true democratization of a standard can do for an industry. Bluetooth has been the de facto standard for all short-range communication that requires untethering of devices. It started with hands-free accessories, and with the advent of BLE has migrated onto the current crop of fitness trackers and other personal devices that access the cloud through your smart device.

In SuperSuit, BLE connects to the smartphone app, a gateway to content, games and updates, fitness and performance metrics, and the SuperSuit community itself. But none of the other requirements are met by any version of Bluetooth. We looked at ANT+, ZigBee, proprietary modules like DigiMesh, and many others. Nothing came close to meeting our feature and cost targets simultaneously. This whole exercise revealed a significant gap on the canvas of the toys and games market. We started by trying to solve the problem for ourselves, and along the way realized the potential to develop a cross-platform proprietary stack that caters to an entire industry segment. Consider connected toys that can talk to each other, distribute content over every peer-to-peer connection they forge, add a layer of intelligence to the network, and let interoperable products play more roles than one.

Most of the protocols in this discussion do not offer true peer-to-peer connectivity, even while offering multipoint connectivity. BLE has a connection-oriented server-client model, along with an advertising mode with user payloads of 20–27 bytes (BLE 4.2 improves on this, but other limitations remain); ZigBee has Coordinators, Routers and End Devices with a limited set of functions for each; and ANT has devices transmitting in short, synchronized bursts at fixed intervals, on different channels for different devices. The rigid distinction of roles in any of these does not work well for us, from setting them up to the recurring, event-based communication exchanges themselves.
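To make the payload constraint concrete, here is a minimal sketch of a hypothetical per-player game-state frame. The field names and sizes are illustrative assumptions, not SuperSuit’s actual layout, and the compile-time check uses the low end of the 20–27 byte figure quoted above.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical per-player game-state frame; every field is an
 * illustrative assumption, not the actual SuperSuit/Zi-Fi layout. */
typedef struct __attribute__((packed)) {
    uint8_t  player_id;     /* who is broadcasting                      */
    uint8_t  team_id;       /* team membership                          */
    uint8_t  health;        /* current health or score fragment         */
    uint8_t  gesture_event; /* last recognized gesture, if any          */
    uint16_t seq_no;        /* sequence number for loss detection       */
    int16_t  pos_x_cm;      /* relative position, if localization is on */
    int16_t  pos_y_cm;
    uint32_t session_id;    /* flushed and regenerated for every game   */
} game_state_frame_t;

/* Legacy BLE advertising leaves roughly 20-27 bytes of user payload;
 * even this minimal frame already consumes most of the low end of
 * that budget, leaving little headroom for richer game state. */
static_assert(sizeof(game_state_frame_t) <= 20,
              "frame must fit the smallest advertising payload budget");
```

Since such advertising broadcasts are unacknowledged, loss detection (requirement 6) would also have to be layered on top, for example via the sequence number above.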

Seamless connection of peers into a network (network boundary for representational purposes only)
Bots can connect as a slave to a player (network boundary for representational purposes only)
Bots can connect as standalone players with complete peer-to-peer network functionality (network boundary for representational purposes only)

We have been asked time and again why we did not just rely on mesh protocols on top of Bluetooth, a UDP-based protocol on top of WiFi, or plain ZigBee. While there are general areas of overlap, we needed to design something from the ground up to offer the combination of features from all of these within our power, size and cost constraints. To simplify our answer: WiFi is just too power hungry, ZigBee has a single point of failure in the coordinator node (which authenticates all joining nodes), BLE is connection oriented, and ANT is geared towards periodic sensor-reading transmissions.

The Zi-Fi stack is not a connection-oriented stack like Bluetooth (which requires the exchange and persistent storage of keys); any device can communicate with any other device within network coverage once a session has been initiated by the network discovery mechanism. It serves as an ad-hoc network capable of self-management and self-organization. Unlike most other use cases, network sessions need to be created and flushed after every game, with minimal intervention, minimal time lag and no network managers. The stack adheres to the MAC definitions of the IEEE 802.15.4 standard, which provides a robust base of specifications to build upon.

Since the nodes are in constant motion and keep falling in and out of the network, conventional approaches to routing, as in ZigBee, become too resource intensive if run frequently. We have developed and experimented with proprietary routing algorithms that allow us to mitigate packet loss up to a certain limit. Any such relationships or connections created for routing are temporary in nature and flushed at the end of the session. One of our key areas of research is indoor localization built into the Zi-Fi stack, which sits at the center of the aforementioned routing algorithms. Zi-Fi is being developed to be cross-platform, and we are still some distance from offering it to developers. The same indoor localization experiments would unlock an additional layer of gameplay, with the relative location of players available to all; consider the possibilities when the arena map is populated with the real-world locations of your team and your opponents. While the results are promising, there is a lot of work to be done! With the launch of SuperSuit, we will get the magnitude of deployment needed to speed up our refinement and add advanced features that address both specific and generic needs.
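As an illustration of the connectionless, per-game session model described above, here is a minimal sketch in C. Every function, type and value is hypothetical, with placeholder bodies so the sketch compiles; this is not the Zi-Fi API, only the lifecycle it implies: discover peers without a coordinator, exchange state by broadcast, watch for loss of network, and flush everything when the game ends.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical session handle; the real stack's internals are unknown. */
typedef struct {
    uint32_t session_id;   /* regenerated for every game, flushed afterwards */
    uint8_t  peer_count;   /* peers currently heard within range             */
} zifi_session_t;

/* Broadcast a discovery beacon and collect replies; any node may
 * initiate, there is no coordinator to authenticate joiners. */
static bool zifi_discover(zifi_session_t *s, uint32_t listen_ms)
{
    (void)listen_ms;              /* stub: would listen on the 802.15.4 MAC */
    s->session_id = 0x5EED5EEDu;  /* placeholder; a real stack derives this */
    s->peer_count = 0;
    return true;
}

/* Broadcast this node's game state to all peers in range; relaying by
 * intermediate peers is temporary and decided by the routing layer. */
static void zifi_broadcast_state(const zifi_session_t *s,
                                 const void *state, uint8_t len)
{
    (void)s; (void)state; (void)len;   /* stub: would hand the frame to the MAC */
}

/* Detect loss of network or this node falling out of range (requirement 6). */
static bool zifi_link_alive(const zifi_session_t *s)
{
    (void)s;
    return false;   /* stub: a real stack would track peer beacons */
}

/* Tear everything down at the end of the game; no keys or routes persist. */
static void zifi_flush(zifi_session_t *s)
{
    s->session_id = 0;
    s->peer_count = 0;
}

/* Typical per-game flow. */
int main(void)
{
    zifi_session_t s;
    if (!zifi_discover(&s, 500))
        return 1;                      /* nobody in range */
    while (zifi_link_alive(&s)) {
        uint8_t state[14] = {0};       /* e.g. the frame sketched earlier */
        zifi_broadcast_state(&s, state, sizeof state);
        /* ... run game logic, process incoming peer state ... */
    }
    zifi_flush(&s);                    /* the session is ephemeral by design */
    return 0;
}
```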

As introduced above, ENGaGE™ is the first natural gesture processing technology for embedded gaming: it recognizes macro gestures performed by players moving through 3D space to trigger actions in a game environment and control bots, and its swift-AI learns complex new gestures on the first trial. The challenge has been to optimize and adapt complex mathematical algorithms to a compute-limited embedded platform while preserving the utmost fidelity for the consumer. Where ENGaGE™ differs from numerous other gesture recognition technologies is in the platform, the sensor (MEMS accelerometers) and the complexity of the natural human gestures it handles. The layered architecture, from feature extraction to gesture detection, has been designed so that future sensors and technologies can be added to the framework. Extensive UX research on top of the engine allowed us to bring to the SuperSuit a set of gestures that intrinsically enrich the experience and performance during gameplay.
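To give a feel for the kind of processing involved, and only as a stand-in for ENGaGE™’s proprietary engine, here is a minimal sketch of one-shot template learning on raw accelerometer windows: a single demonstration is stored as the template, and live windows are matched against it with a simple distance measure. The window size, the threshold handling and all names are assumptions.

```c
#include <math.h>
#include <stdbool.h>

#define GESTURE_WINDOW 64          /* samples per gesture window (assumed) */

/* One 3-axis MEMS accelerometer sample. */
typedef struct { float x, y, z; } accel_sample_t;

/* A gesture "learned" from a single demonstration: the recorded window
 * plus a match threshold. A toy template matcher, not the ENGaGE engine. */
typedef struct {
    accel_sample_t tmpl[GESTURE_WINDOW];
    float          threshold;
} gesture_template_t;

/* Learn from one trial: copy the demonstration into the template. */
void gesture_learn(gesture_template_t *g,
                   const accel_sample_t demo[GESTURE_WINDOW],
                   float threshold)
{
    for (int i = 0; i < GESTURE_WINDOW; ++i)
        g->tmpl[i] = demo[i];
    g->threshold = threshold;
}

/* Mean per-sample Euclidean distance between a live window and the
 * template; below the threshold counts as a match. */
bool gesture_match(const gesture_template_t *g,
                   const accel_sample_t live[GESTURE_WINDOW])
{
    float acc = 0.0f;
    for (int i = 0; i < GESTURE_WINDOW; ++i) {
        float dx = live[i].x - g->tmpl[i].x;
        float dy = live[i].y - g->tmpl[i].y;
        float dz = live[i].z - g->tmpl[i].z;
        acc += sqrtf(dx * dx + dy * dy + dz * dz);
    }
    return (acc / GESTURE_WINDOW) < g->threshold;
}
```

A real engine would at minimum normalize for orientation and speed, align windows in time (for example with dynamic time warping) and work on extracted features rather than raw samples, as the layered feature-extraction-to-detection architecture described above suggests.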


Originally published on supersuit.io by Shobhit Niranjan