Artificial Neural Patches

Ash Hashemi
Published in Nerd For Tech
Jun 12, 2021 · 12 min read

This article describes what neural patches and patch systems are, their advantage over traditional neural network design, and why we’re looking for people to train interesting artificial neural patches for image classification. It goes over the steps to train such patches using a simple Windows tool, how to test them in the wild on mobile devices (iOS and Android), and how to submit them for publication review.

Background: Neural Patches

In 2006, researchers used fMRI (functional magnetic resonance imaging) and electrical recordings of individual nerve cells to find regions of the inferior temporal lobe that become active when macaque monkeys observe another monkey’s face. They found that some nerve regions are triggered only when a face is identified. And those trigger other regions which show sensitivity only to specific orientations of the face, or to specific feature exaggerations. Such regions of a neural network that are conditionally activated in the presence of certain coarse features, and then extract finer features, are referred to as Neural Patches. And a collection of such interconnected patches is referred to as a Patch System [1]. Key here is the conditionality of activation.

There are two intriguing aspects to neural patches. The first is that they are incrementally accumulative. A patch system can gradually increase in size by adding more and more patches that are triggered by, and themselves trigger, other neural patches, extracting more and more detail along the way. The second is that they are independently trainable. While a neural patch is triggered by the presence of a particular coarse feature identified by a “parent” neural patch, the input activations themselves are the same for all patches in a system. What really distinguishes the different patches from each other is what they are triggered on, i.e. the types of inputs they observe (and are trained with). Therefore, a parent neural patch can be refined, and as long as it classifies the same coarse features, there is no need for its child patches to change in any way.

Left: A traditional monolithic neural net, with activations coming from preceding layers. Right: a patch system neural net, with patches that share the same activations but are triggered only on hitting the required coarse features.

The use of neural patches enables the extraction of finer and finer details in a way that is not possible with traditional neural networks. This is achieved by specializing patches in the extraction of fine features under the condition that certain coarse features have already been observed. In other words, the patches in a patch system progressively target finer details not through a producer-consumer relationship (sequentially feeding layers raw output values from prior layers) as in traditional deep neural networks, but through a master-slave relationship (with the master controlling whether the slave is even awake).

As shown above, the slave patches consume the same input activations as their master patches. After all, the features that are necessary and sufficient for telling a monkey from a banana are not great for telling a snarling monkey from a friendly one, nor for telling the ripeness of a banana. In other words, once one has assessed that it’s a monkey’s face being observed, one needs to go back and reassess for features that will tell if it is a snarling monkey, and these snarling features would likely not have been necessary or sufficient for just telling if it’s a monkey. Therefore, the monkey patch is a master controlling the activation of the snarling patch, not a producer feeding it. This distinction has great implications for how such a design can be employed in artificial neural networks.

Patch Model for Artificial Neural Nets

The first advantage of using a patch system model in artificial neural networks is that it allows for gradual development, adding the ability to detect further details (as “patch plugins”) when the need for them emerges. Moreover, these plugins can be developed in an ad-hoc fashion by disparate groups, and can be individually “upgraded” as better versions emerge. This is opposed to the traditional approach of centralized end-to-end training of pre-defined neural networks (possibly with transfer learning and weight locking). The second advantage of using a patch system is that it introduces an element of programmability and understandability, as the conditions for triggering neural patches are tangible software if-conditions.

The only requirement of the master patch configuration is that its outputs go through a logistic output function, such as soft-max. Individual output neurons will thus indicate the strength or presence/absence of a certain feature, which can subsequently be used to trigger slave patches. This will be incorporated into the base network we will be using as an example shortly, so you don’t really need to worry about it when getting started.
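To make that concrete, here is a minimal sketch of such a trigger in PyTorch-style Python. The names are hypothetical (base_net, the patch objects, trigger_index, and the threshold are all illustrative, not the tool’s actual API):

    import torch

    TRIGGER_THRESHOLD = 0.5  # hypothetical confidence needed to wake a slave patch

    def run_patch_system(base_net, patches, image):
        # The master network yields both the shared input activations and
        # a soft-max distribution over its coarse categories.
        activations, coarse_probs = base_net(image)
        results = {"coarse": int(coarse_probs.argmax())}
        for patch in patches:
            # The trigger is a tangible software if-condition: a slave patch
            # only wakes up when its trigger category's output neuron is strong.
            if coarse_probs[patch.trigger_index] > TRIGGER_THRESHOLD:
                # Slave patches consume the SAME activations as the master,
                # re-assessing them for finer features.
                results[patch.name] = int(patch(activations).argmax())
        return results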

The most significant way in which using a patch system changes the development of artificial neural networks is that it turns it into a data-centric training task, rather than one of exploring network topologies. Neural patches need not be diverse or complicated in their network design. In fact, they can all have simple uniform topologies. What distinguishes one patch from another is what triggers it and the data it was trained on, not its topology. Therefore, the main work involved in developing a neural patch is actually in refining the training data and its trigger conditions (in addition to the actual training and testing processes themselves).

To help with this, we have been working on a couple of simple tools so that patch developers can focus on the innovative stuff, rather than fiddling with framework parameters. We are looking for interested people to try these tools out to train novel neural patches for identifying neat image features. It is actually very fun to test them on the mobile iOS/Android apps and see them trigger and identify fine features in the wild. We’ll even put the neat ones you develop in the Neural Market in your name (so others can use them for free), and we’ll send you a thank-you gift too. So let’s go over how to use the tools to create these things.

Training a Neural Patch

The starting point of creating the neural patches that we’ll be talking about is a free self-contained solution available on the Windows Store. It does not require advanced knowledge of neural networks or access to a high-end graphics card (any decent graphics device with OpenCL support is sufficient). All you really need to know is that it performs gradient descent training on a neural patch, starting with Xavier initialization, given a fixed VGG16 base network with soft-max output activation, which extracts coarse features from input images.

There is no end-condition to the training; it continues indefinitely once started, while reporting a rolling average of the loss function. After satisfactory training, it can be paused and a plugin file with the extension .specialneurons exported. This plugin contains the weights of the neural patch, the base-network output conditions on which it is triggered, and the output classifications. Additionally, an icon image can be provided to be displayed on the inference device when the patch is triggered. This .specialneurons file can be deployed to mobile devices as a plugin to the accompanying app, for testing in real-world settings and lighting. The only inputs that you need to provide to the training tool are the paths to the base network snapshot and the training image data.

Main fields of the patch training tool (Betect)

The Base Network

The base net is what extracts coarse features from raw images. These are used by neural patches once they’re triggered to identify finer details. So first download and unzip this base network configuration file and point the tool’s Network field to the extracted .txt file (the .txt and .data files need to be available in the same path).

We will be using this specific base network as it is what is currently deployed in the iOS/Android apps that will be used for live testing. For the curious, it extracts features up to the last fully-connected layer of a VGG16-like network. The neural patches we will be training here consist of a single fully connected layer with a soft-max output activation. Each is trying to identify some nuanced detail in a coarse sub-category of images. The base neural network and neural patch topology can be changed for more advanced usages, which we will not get into here.
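For the sake of intuition, here is what that patch topology could look like as a PyTorch sketch. The 4096 feature width (VGG16’s fully-connected width) and the two-class output (the shark example from the walkthrough video later on) are assumptions for illustration:

    import torch.nn as nn

    FEATURE_WIDTH = 4096     # assumed width of the base net's last FC features
    NUM_PATCH_CLASSES = 2    # e.g. shark seen from under vs. above the water

    # A neural patch: a single fully connected layer with soft-max outputs,
    # fed by the fixed base network's coarse-feature activations.
    patch = nn.Sequential(
        nn.Linear(FEATURE_WIDTH, NUM_PATCH_CLASSES),
        nn.Softmax(dim=-1),
    )

    # Xavier initialization, as the training tool starts with.
    nn.init.xavier_uniform_(patch[0].weight)
    nn.init.zeros_(patch[0].bias)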

Training Image Data

Each neural patch needs its own training data (as that is what distinguishes patches from one another). The assumption is that a patch will be triggered only when a coarse feature is identified, and therefore it should only be trained with data that “belongs” to the trigger category. Structuring this data is the main grunt work in creating a good neural patch. It needs to be structured in a folder with a map_clsloc.txt in its root. This file defines, with their identifier tags and displayable names, the categories of images that trigger the patch, that are an output category of the patch (and have sample data present in the training folder), or both.

As a template, this is the base neural network’s map_clsloc.txt file. It consists of three columns, with each row corresponding to one image class. The first column indicates the tag of each class, followed by its output index and its displayable name. The tags of the base network are what ImageNet refers to as Synsets. For consistency, it is good practice to stick with this convention when possible. But all that is really required of tags here is that they start with the character ’n’ followed by a unique integer number to identify them. The output index indicates the output neuron depth that identifies that category.
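For instance, a few rows in that three-column format would look like the following (the tags are genuine ImageNet Synsets; the output indices here are illustrative):

    n01494475 4 hammerhead
    n02099601 207 golden_retriever
    n02504458 386 African_elephant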

For a neural patch to be triggered, its map_clsloc categories must have at least one overlap with the base neural network’s categories. So first choose the categories the patch will be identifying, and the base categories that it will be triggered on. Then add them all to a map_clsloc file. Patches can have categories defined in their map_clsloc for triggering purposes alone, without having that category as an output option (or it even being present in the training data). This is useful for identifying super-categories from among a group of base categories (e.g. identifying big from small dogs regardless of breed).
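As a hypothetical illustration of the simplest case, a dog-breed patch whose categories all exist in the base net (case (a) in the figure captioned below) could use a map_clsloc.txt like this, where each shared tag acts as both a trigger and an output category:

    n02085620 0 Chihuahua
    n02099601 1 golden_retriever
    n02106662 2 German_shepherd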

Example of structuring the classifications of (a) a dog breed patch with classifications all within the base net and triggered on those same categories, and (b) a spider type patch with types broader than those in the base net and trigger categories limited to those in the base net.

Next comes the main work! Add representative images for each category to the training data folder. The more diverse and varied these images are, the better your neural patch will become at identifying the target features when triggered. The name of each individual image file simply needs to start with the identifier tag of the category it belongs to, followed by an underscore character (followed by any other optional string). It is advisable to place images of each category in a separate folder for manageability. The training tool will randomly pick images by recursively searching the subfolders once the training process is initiated. To get started, download this as a simple template for a patch’s image data folder (in this case for dog breed detection).
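A hypothetical layout for such a folder, reusing the breed tags from above, could look like:

    dog_breeds/
        map_clsloc.txt
        chihuahua/
            n02085620_001.jpg
            n02085620_park.jpg
        golden_retriever/
            n02099601_001.jpg
            n02099601_beach.jpg
        german_shepherd/
            n02106662_001.jpg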

Start the Training

Once the paths to the base network and correctly structured image data have been provided, you can start training a neural patch by simply clicking the train button (shown above). There are also fields to set the batch size, learning rate, momentum, and decay rates, but leaving them at their defaults to begin with should be fine. Monitor the loss rate log to make sure it is declining.
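Under the hood, the training the tool performs amounts to ordinary gradient descent on the patch while the base net stays frozen. A rough sketch, continuing the hypothetical PyTorch patch from earlier (the hyperparameter values and the loader are placeholders, not the tool’s defaults):

    import torch
    import torch.nn.functional as F

    # Plain SGD with momentum and weight decay, mirroring the tool's fields.
    optimizer = torch.optim.SGD(patch.parameters(), lr=0.01,
                                momentum=0.9, weight_decay=5e-4)

    rolling_loss = None
    for images, labels in loader:               # random batches from the image folders
        with torch.no_grad():
            activations, _ = base_net(images)   # the fixed base net is never trained
        probs = patch(activations)              # single FC layer + soft-max
        loss = F.nll_loss(torch.log(probs + 1e-8), labels)  # logistic (cross-entropy) loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Report a rolling average of the loss, as the tool's log does.
        rolling_loss = loss.item() if rolling_loss is None \
            else 0.99 * rolling_loss + 0.01 * loss.item()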

Note that on starting the training process for a new neural patch, the output layer will need reconfiguring to match the intended number of categories of the new patch (identified by the number of rows in the map_clsloc file). This is because the number of output neurons of a new patch will most likely not match the 1000 of the base network. The training tool will automatically adjust this after displaying the dialog box below to verify. Select “Yes” to start the training process with the last layer reconfigured for the number of output categories of the patch.

Message to reconfigure patch output depth.
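Conceptually, the adjustment is small; continuing the earlier hypothetical sketch, the equivalent operation would be:

    import torch.nn as nn

    # Take the patch's output depth from the number of rows in map_clsloc.txt.
    with open("map_clsloc.txt") as f:
        num_classes = sum(1 for line in f if line.strip())

    # Swap in a fresh, Xavier-initialized last layer of matching depth, since
    # the old depth (e.g. the base net's 1000) will most likely not match.
    patch[0] = nn.Linear(FEATURE_WIDTH, num_classes)
    nn.init.xavier_uniform_(patch[0].weight)
    nn.init.zeros_(patch[0].bias)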

The main window of the application will display, in real time, the images randomly being fed to the back-prop training pipeline. Monitor these images to make sure there is sufficient diversity in the categories of images the patch is being trained with. The rand-tilt-shift checkmark will enable/disable image augmentation with random tilts and color shifts, which introduces slight regularization by way of data augmentation. There is also a stop-N-show checkmark to pause training at any point; it displays the current inference outcome for spot checking, and provides the option to save a snapshot of the neural patch and export the final distributable patch.
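For a feel of what that option does, a comparable augmentation could be sketched with torchvision (the exact tilt and shift ranges the tool uses are internal to it, so these values are assumptions):

    from torchvision import transforms

    # Slight random tilts plus color shifts: light augmentation that acts
    # as a mild regularizer, in the spirit of the rand-tilt-shift option.
    augment = transforms.Compose([
        transforms.RandomRotation(degrees=10),
        transforms.ColorJitter(brightness=0.2, contrast=0.2,
                               saturation=0.2, hue=0.05),
    ])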

The amount of time the training needs to run depends on the size of your collected training data, the number of categories and the capacity of your graphics card. This is something you will need to experiment with depending on the accuracy you are targeting. Once “sufficiently” trained, it is time to export the patch and see how well it performs on live cell phone camera feeds.

This video is a walkthrough (audioless) showing how to organize the image data and train a neural patch. This specific example is triggered on the shark category from the base net, and tries to detect whether the shark is being observed from under or above the water. What new fine-grained classifications can you come up with?

Export and Transfer the Patch

To export the neural patch as a sharable file, first provide a name for the patch in the “patch description” field shown below. This name will be used to uniquely identify the neural patch when used as a plugin. Also provide an optional png file that will be displayed when the patch is triggered. Then simply set the stop-N-show checkmark. This will pop up a message box asking whether you want to save the current snapshot or not (in addition to displaying the last inference outcome). If you select “Yes”, the current full snapshot (which can be used for further training) and a .specialneurons patch snapshot (which can be used for transferring to the mobile app) will be dumped in the location of the base network files (tagged with a time-based hash).

Steps for exporting sharable snapshots of a trained neural patch.

To continue training after that, uncheck the stop-N-show checkmark in the main window and then select “No” in the message box (watch out for the message box getting hidden behind the main window).

Test the Patch on Mobile Devices

Now comes the fun part! The free mobile app can be accessed here for iOS and Android. Note that the Android version does require a device with OpenCL support — recent Samsung, LG, Sony, and Motorola devices have it!

Tapping the camera button in the top right corner of the app enables live on-device object detection with the base neural network for the 1000 ImageNet categories. On first launch, both apps will perform a rather large download of the base neural network, so be sure to be on wifi. A progress bar at the bottom of the app screen indicates inference progress through the neural network layers. At the end of each inference, the main identified object’s name is indicated above the progress bar. How fast the per-sample inference proceeds will depend on your mobile graphics device, as all processing is done on device (not in the cloud).

There is a plugin button (with a plus sign) in the bottom right corner of the app. This is our portal to neural patch plugins. Tapping this button once takes you to our Neural Market, where you can download reviewed patches. But long-pressing this button opens a file browser, from which you can plug in any locally stored neural patch. This is how you can test the ones you develop yourself.

Mobile app that takes your trained neural patch as a plugin.

Just transfer your .specialneurons file to your device through email or some cloud share space and download it locally. Then long-press the plugin button, find the file, and tap on it. If the file is properly formed, the app will indicate that it has plugged in the new patch. Now when a trigger category for that patch is identified, the patch icon will be displayed and the patch will be triggered to identify the sub-category of the image.

Additionally, there’s a snap button in the bottom left corner of the app that can be used to take pictures of objects that are incorrectly identified by a patch. Backing out of live camera mode takes you to the start screen, where the taken image can be forwarded for enhancing your data set with stuff the patch tends to get wrong.

Submit Trained Neural Patches for Review

Once you’ve trained your patch with interesting collected images, send the exported .specialneurons file as an attachment to submission@advancedkernels.com. The patch must have been tested on Android and iOS devices and must have a valid trigger icon. Provide a description of what the neural patch identifies and what classifications it is triggered on. We will test it, and if it is suitable and interesting, we will publish it in your name to the Neural Market for free download by others.

[1] “The macaque face patch system: a turtle’s underbelly for the brain”, Nature Reviews Neuroscience, Nov. 2020.
