AI, Emerging Tech & National Defense @ SIPRI Stockholm Security Conference

Dr. Lydia Kostopoulos
Sep 23, 2018


The Stockholm International Peace Research Institute (SIPRI) hosted its annual Stockholm Security Conference with the Munich Security Conference (MSC) this year on September 19–20.

It was an invite-only, closed-door event held under the Chatham House Rule. As such, the following includes only thoughts I shared myself, which are attributable to me.

I spoke on the “AI and Robotics Plus X” panel and addressed two questions.

A major outcome of innovation in the fields of AI has been the remarkable progress of autonomy in weapon systems and the networks in which they are embedded. Could you describe for us how current and foreseeable advances in autonomy are changing the way the military might field force and make lethal decisions on the battlefield?

I see the advances in autonomy making an impact in the military in the following three areas:

  1. Situational Awareness
  2. Decision Support
  3. Force Application

*** (An audience member at the conference suggested adding ‘Command and Control’ as a fourth)

A key challenge for military application is that data is limited: for large militaries, a whole-of-enterprise approach would be needed to funnel data into a big-data scheme where inputs are streamlined and machine learning (ML) can learn from the full force rather than from segregated parts of it.

One potential aspiration could be to have the data collected and fed to one ‘brain’, so to speak, that would manage it, similar to Tesla’s “fleet learning”, which leverages the data from all Tesla cars in use to learn collectively. With this crowd-sourced driving information, the system learns and pushes updates back to Tesla vehicles around the world. The same method could potentially be applied to naval ‘fleet learning’, or to fighter jets, tanks, etc.
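To make the ‘one brain’ idea concrete, here is a minimal sketch of fleet-style learning in the spirit of federated averaging. This is not Tesla’s actual implementation; the platform names, the toy one-parameter model, and the averaging scheme are all illustrative assumptions. Each platform trains locally on its own data, and a central ‘brain’ merges the updates and pushes the result back to the whole fleet.

```python
# Sketch of "fleet learning": local training per platform, central merge.
# All names and data are hypothetical; the model is a 1-D least-squares
# fit y = w * x, kept deliberately tiny to show only the communication loop.

from statistics import mean

def local_update(w, local_data, lr=0.1):
    """One gradient-descent step on this platform's own observations."""
    grad = mean(2 * x * (w * x - y) for x, y in local_data)
    return w - lr * grad

def fleet_average(updates):
    """The central 'brain': merge per-platform weights into one model."""
    return mean(updates)

# Three platforms, each observing the same underlying relation y = 2x
fleet_data = {
    "ship-01": [(1.0, 2.0), (2.0, 4.0)],
    "jet-07":  [(3.0, 6.0), (0.5, 1.0)],
    "tank-12": [(1.5, 3.0), (2.5, 5.0)],
}

global_w = 0.0
for _ in range(50):  # communication rounds between fleet and 'brain'
    updates = [local_update(global_w, data) for data in fleet_data.values()]
    global_w = fleet_average(updates)  # push merged model back to the fleet

print(round(global_w, 2))  # converges toward 2.0
```

The design point the sketch illustrates is that raw sensor data never leaves each platform; only model updates travel to the central node, which matters for bandwidth-constrained and classification-constrained military networks.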

I believe two foreseeable advances are worth highlighting in the discussion of how artificial intelligence can help the military enterprise.

Force Management:

Artificial intelligence can help planners allocate resources more efficiently, reduce duplication of effort, and increase the speed of assessing availability to fulfill Requests for Forces (RFF). This use case could apply to Global Force Management (GFM) and to combatant commands (CCMD), whether geographic or functional, as well as to service force management.

By leveraging machine learning and data from previous years, such as orders, requirements, and resources, AI could potentially offer anticipatory needs assessments.

Intelligence Analysis:

The tempo of conflict has increased, requiring analysts to process, fuse, and understand data more quickly than before, with exponentially more information than before. AI can be leveraged to pull large amounts of intelligence from several types of sources and identify patterns that human analysts could not find on their own. Intelligence analysis enabled by artificial intelligence can accelerate decision-making by providing new insights around the clock; combined with cloud computing, those insights can be delivered on demand around the globe.

What are the promises and the perils that are likely to arise with the convergence of AI with other emerging technologies?

Andrew Ng (former Baidu Head of AI) and other leading AI scientists compare artificial intelligence to electricity, in that it will be fundamental and pervasive. Just as everything became more useful when it was ‘electrified’, everything will become more useful when it is ‘cognified’.

There are more technological convergences than can be covered in one conference, let alone one panel discussion. As such, I would like to talk about one convergence that has not been getting as much attention: the convergence between artificial intelligence and bio-informatics. There is tremendous potential in combining AI and biology. Earlier this summer, the U.S. Army Mad Scientist Initiative organized a conference on bio-convergence and the military, with talks ranging from neuromodulation and learning enhancement to DNA editing, body enhancements, mind uploading, and brain-computer interfaces (BCI). The conference report can be downloaded here and videos of the conference talks can be viewed here.

[For those interested in the short science fiction story I wrote for this conference here is the link.]

The convergence of brain-computer interfaces with AI will be a tremendous force multiplier. It will go far beyond controlling a drone fleet with one’s mind. In an era like this, where the tempo of conflict is increasing, the convergence of brain-computer interfaces, artificial intelligence, and weapons systems will be one to reckon with, in both speed and lethality.

Bio-convergence for defense is still undergoing a great deal of research and development, and while much of the technology is currently at the beta stage, the research today gives us an indication of the intent behind its use.

Regarding the perilous aspect, things will truly become more complex: we already have more data than we can understand and process into actionable intelligence. This applies in our personal and professional lives, and very much so in the military environment.

We are creating more technological complexity by adding more technology to the mix, and most of these technologies are designed for functionality, not security. We have a mix of legacy and new systems that are communicating and merging, and the threat landscape continues to grow. As technologies such as AI, the Internet of Things (robotic or otherwise), and the combat cloud converge, much of the decision-making may become delegated out of our control. One of the biggest risks we are creating resembles the one facing the cybersecurity and cyberdefense world: we have leaped off a cliff into the wonders of the digital world, embracing each new technology without prejudice as we soar through the sky, confident that there are no ‘real’ risks and that those that do exist are taken care of by the IT department.

We already know what happens when a serious cyber attack wipes the hard drives of thousands of corporate computers (Saudi Aramco); we have seen cyber attacks with second- and third-order effects that cross borders and end up in unexpected countries and networks (Maersk); and we have seen countries (Estonia) and cities (Atlanta, Georgia) either brought to their knees or forced to reconsider their security posture after a wake-up-call incident.

I would hope that these lessons guide us in preparing more robust contingencies for when attacks happen (not if): when, whether through malice or ordinary technological malfunction, the cognification of our world goes down; when the decision-support infrastructure we have surrounded ourselves with can no longer give us the recommendations or situational awareness we have grown accustomed to; and, worse, when the humans who used to provide it are no longer able, or no longer trained, to do so.

In this respect, we are well prepared with the lessons learned from cyber defense. There is an opportunity to create a culture of security by design for artificial intelligence, to develop contingency plans for when the AI decision-support infrastructure goes down, and to design alternatives in tandem with the design of the AI support itself.

---

Dr. Lydia Kostopoulos’ (@LKCYBER) work lies at the intersection of people, strategy, technology, education, and national security. She addressed United Nations member states on the military effects panel at the Convention on Certain Conventional Weapons Group of Governmental Experts (GGE) meeting on Lethal Autonomous Weapons Systems (LAWS). Her professional experience spans three continents, several countries, and multi-cultural environments. She speaks and writes on disruptive technology convergence, innovation, tech ethics, and national security. She lectures at the National Defense University and the Joint Special Operations University, is a member of the IEEE-USA AI Policy Committee, and participates in NATO’s Science for Peace and Security Program; during the Obama administration she received the U.S. Presidential Volunteer Service Award for her pro bono work in cybersecurity. In an effort to raise awareness of AI and ethics, she is working on a reflectional art series. She is currently working on a game about emerging technology and ethics, which is expected to be out by the end of 2018.


