“The Art of the Possible”

Katerina Sedova
Spyglass
Apr 23, 2017

This is week 14 of the inaugural Hacking4Defense class at Georgetown. Team Spyglass is nearing the end of our journey within the framework of the class. We started down the H4D road with a broad challenge from our sponsors at the Asymmetric Warfare Group (AWG), US Army: “analyzing social media for identifying and predicting probable hotspots of social unrest.” We will be ending the class with a minimum viable product (MVP) that addresses a specific scenario within this broad challenge, one where we can add value in force protection by applying image and facial recognition technology to alert troops overseas when a picture of military vehicles, military personnel, or a soldier’s own face is posted on social media without her knowledge, helping prevent a range of threats in general and crowd-sourced attacks in particular.

We have come full circle in our H4D experience. In week 3, on our very first “get-out-of-the-building” (in lean startup parlance) exercise, we visited Fort Meade to “walk in the shoes” of AWG operational advisors (OAs), to understand their workflow, pain-points, and the experience of troops they support. Little did we know that only 11 weeks later, we would be back at Fort Meade to show Spyglass in action, testing our MVP and getting feedback before our final push!

Getting Out of The Building, one last time

On Friday morning, Jose, Chloe, and Katya did the first demo of Spyglass outside of class and our Georgetown location. In preparation for our visit, Jordan, who has been doing all of the in-class demos of our MVP, transferred his knowledge to Jose and Katya so that we could run the MVP and troubleshoot potential issues without him, as he was unable to join us at Fort Meade for the field test. Lo and behold, of all the glitches we anticipated, we did not expect that our conference room would have no wifi!

Jose and our AWG advisor Kyle quickly found a workaround, and we were able to present Spyglass and test it without a hitch. While Jose explained the inner workings of Spyglass and drove the live demo, Chloe and Katya conducted a loose focus group to capture the impressions of seven OAs and two of their commanders.

Overall, the feedback we received was quite positive. Our goal was to test the fundamental hypothesis: “Would our beneficiaries use this tool if they had access to it?” The answer of the AWG OAs and their commanders was a resounding yes! The next level of feedback can be summarized in three categories: 1) the current capabilities of the tool, 2) future capabilities and feature requests, and 3) pointers for our future briefing(s) to military leadership.

Testing Spyglass with AWG Operational Advisors

The first set of key insights focused on the current capabilities:

  • The tool looks “tremendously useful”
  • For OAs, the simplicity and lightweight nature of SMS alerts is a big win, as other tools they currently use are “clunky,” sometimes surfacing too much information and making it difficult to tease out urgency
  • Spyglass can also be useful for the “insider threat” counterintelligence portion of force protection to identify unauthorized posts and images of military assets
  • Having data differentiation (vehicles vs general personnel vs specific individual) in the feed is helpful, especially if the user can customize which feeds she wants to receive.
  • Being able to train the neural network on photos of specific equipment, much like we do with individuals, is tremendously helpful in detecting when pictures of convoys are posted on social media as they move from one Forward Operating Base (FOB) to another or are on a resupply mission. Similarly, detecting pictures of physical locations (e.g. the front gates of a base) would be tremendously helpful in force protection.
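Recognizing a specific individual or piece of equipment typically comes down to comparing an embedding of the detected face or object against a small set of enrolled examples. The sketch below is purely illustrative, not Spyglass internals: the function names, the three-dimensional toy embeddings, and the alert threshold are all our own assumptions (real embeddings come from a trained neural network and have hundreds of dimensions).

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def best_match(query, enrolled, threshold=0.8):
    """Return the enrolled label most similar to `query`, or None if
    nothing clears the alert threshold (i.e., no alert is sent)."""
    label, score = None, -1.0
    for name, emb in enrolled.items():
        s = cosine_similarity(query, emb)
        if s > score:
            label, score = name, s
    return label if score >= threshold else None

# Hypothetical enrolled embeddings; a real system would produce these
# by running enrollment photos through the recognition model.
enrolled = {"soldier_a": [0.9, 0.1, 0.0], "humvee": [0.0, 0.9, 0.4]}
print(best_match([0.88, 0.12, 0.02], enrolled))  # → soldier_a
```

The threshold trades false alerts against missed detections; in practice it would be tuned per deployment rather than hard-coded.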

The second category of key insights revolved around what is possible going forward and may lead to future functionality:

  • Can Spyglass be trained to differentiate U.S. vs foreign military equipment/uniforms?
  • A dashboard view of the feed on a desktop would be helpful, as in the conventional Army these feeds will likely be monitored by the S-2/S-3 intelligence or operations officer. The S-2 is also likely to be operating in a classified environment and without a phone, so alerting would need to work through another mechanism, potentially email or a visual dashboard alert.
  • Can the bounding geographic rectangle be modified by the user to expand the area of coverage? Similarly, can the geographic location be set by the user on the ground, to account for frequent short-term travel?
  • Can the system be trained to identify adversary drones in real time?
  • Counter-surveillance consideration: if the phone number from which the alert is sent is always the same, it may draw the attention of adversary surveillance. Is it possible to obfuscate the origin of the alert?
  • In addition to the hyperlink to the posted image, can the alert include additional info, such as the handle from which the image was posted and a map pin of the location where the photo was taken?
  • Training the tool on photos of specific individuals carries significant responsibility for their personally identifiable information (PII). Spyglass must have a story around storage of this data (if any) and how it will protect it against potential cyber theft.
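The user-adjustable bounding rectangle requested above is straightforward to sketch. Everything here is a hypothetical illustration rather than Spyglass code: the function names, the `(south, west, north, east)` box convention, and the sample coordinates are our own assumptions.

```python
def in_bounding_box(lat, lon, box):
    """True if (lat, lon) falls inside a user-set bounding rectangle.
    `box` is (south, west, north, east) in decimal degrees."""
    south, west, north, east = box
    return south <= lat <= north and west <= lon <= east

def expand_box(box, margin_deg):
    """Grow the rectangle on all sides by `margin_deg` degrees,
    e.g. to cover frequent short-term travel around a base."""
    south, west, north, east = box
    return (south - margin_deg, west - margin_deg,
            north + margin_deg, east + margin_deg)

aoo = (34.0, 69.0, 34.5, 69.5)  # hypothetical area of operations
print(in_bounding_box(34.2, 69.3, aoo))                   # → True
print(in_bounding_box(35.0, 69.3, aoo))                   # → False
print(in_bounding_box(35.0, 69.3, expand_box(aoo, 1.0)))  # → True
```

Note that this naive check breaks for rectangles spanning the antimeridian (where east < west); a production filter would need to handle that wrap-around case.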

Showing off facial recognition: training the model to recognize Chloe

The last category of feedback centered on how we speak about Spyglass in pitches and briefings to the military:

  • Starting the pitch with the threat we are trying to detect, rather than a problem statement, will help frame the conversation around the environment in which they operate
  • Be able to explain how this solution fits into the original problem challenge presented by the leadership
  • Be able to answer under which legal authority this tool would operate (Title 10 vs. Title 50)

Additional interviews we conducted this week with C., a former Special Forces Green Beret, and L., a former intelligence officer in the US Marine Corps, have also echoed this feedback.

Reflecting on the H4D experience and story-boarding our final video

Finally, as we near the end of our H4D journey and build our final presentation for the class, we have been looking back on the wild ride we have had with H4D. Through nearly 120 interviews with potential beneficiaries and technologists working in this space, we heard a wide spectrum of pain-points and ideas about what is important to glean from social media for force protection: from atmospherics to population sentiment, from social network analysis to nuanced text processing and the translation of ontologies, from geolocation and map overlays to geo-inferencing. Through beneficiary discovery, we — a group of graduate students without any military experience — learned a tremendous amount about the scope, workflow, pain-points, and environment in which our men and women in uniform serve around the world. Through “getting out of the building” experiences and the mentorship of technology partners, we got glimpses of the magnitude of great work being done in this space, of what we didn’t know we didn’t know, and of the scope of opportunity. Through our own experiments with code to work around the obstacle of not having a pre-tagged dataset of threatening posts, we learned by building and ventured deeper into what is possible with image recognition.

At times what we heard in the beneficiary interviews sent us spinning in a circle of contradictions or running in opposite directions, as we tested our hypotheses, fine-tuned how to ask questions more effectively, and learned to tease out the essence of what beneficiaries actually need from what they said they needed. At times, the internal detractor — the naysayer within — almost got the better of us, overwhelmed by the complexity and nuance of our challenge. When the naysayer within said “we can’t,” this team said “we can,” and it did. At times, this project eclipsed all other academic and job commitments, and yet, at times, we wished we could be working on it full time.

Our team is stronger than the sum of the individuals within it, individuals who are human. Startups rarely begin with a group of people who barely know each other, without an established dynamic of trust, communication, and boundaries, coming together and hitting the ground running. As a team, we’ve had our moments when passion, personality dynamics, communication patterns, and frustration with our direction or process got the better of us. That we were able to gel and deliver the Spyglass MVP in a short 14 weeks is a testament to the professionalism, unique skill sets, and work ethic of the individuals on Team Spyglass. Our skins have grown thicker through the relentlessly direct yet constructive precision questioning of our teaching team and the mentorship of our advisors.

So what’s next for our last week of H4D? We will be integrating facial recognition into Spyglass end-to-end, continuing to reflect on the totality of our H4D experience for our final slide deck, and building our final video. Tune in to our final presentation on May 1st as we look back on our experience and show off our MVP to the senior leadership of AWG.

The journey of Spyglass through the H4D classroom is ending. In 15 weeks, we will have shown the “art of the possible,” yet we have only scratched the surface. Will this journey’s end be another’s beginning?
