Big Data vs ISIS (Part 2): Interview with Paulo Shakarian

Synced · Published in SyncedReview · Dec 19, 2015

Paulo Shakarian is an Assistant Professor at Arizona State University, where he directs the Cyber-Socio Intelligent Systems (CySIS) Lab. He is a recipient of the AFOSR Young Investigator Award, the DoD Minerva Award (as a co-PI), and the DARPA Service Chiefs’ Fellowship. Some of his work has been featured in the media, including The Economist and Popular Science. He has also authored several books, including Elsevier’s “Introduction to Cyber-Warfare” and Springer’s forthcoming “Diffusion in Social Networks.” For the latest news on Dr. Shakarian, visit: http://shakarian.net/paulo
7. Recently, with the development of the mobile internet and the Internet of Things, more and more data can be collected, causing an explosion of data. What changes has Big Data brought to anti-terrorism and anti-crime? How can this data be used? How can privacy be protected?

This is an interesting question, and something that I think about quite often. The challenges brought by technologies such as social media and the Internet of Things are significant from both an anti-terror/anti-crime and a privacy-preserving perspective. From the anti-terror/anti-crime aspect, the new technologies mean more data can potentially be collected, but this data is not useful without proper analysis and methods for handling large data sources. Further, technologies such as social media provide criminals, extremists, and others an entirely new way to conduct themselves, and it is challenging to find important information in a timely manner.

The protection of privacy is also an important aspect of this, as criminals, terrorists, malicious hackers, and the like will use the openness of computer platforms to steal people’s identities to raise money, hold computers for ransom, and blackmail people. I believe a lot of people do not understand the risks in using many technologies that make much of their information more open than they realize, and that makes them susceptible to such attackers. It is important to work to protect one’s privacy against such criminals and extremists, and that really starts with better technology education. I think that in the future we may see terrorist attacks in which large numbers of people’s personal information is exposed in a manner that causes significant economic damage, so educating the population to take measures to protect their personal information is very important.

8. Your work can be used to predict ISIS’s future moves and prevent tragedy from happening to victims. Do you have any plan in mind to take this a step further and prevent people from turning into terrorists even before they join ISIS?

Yes, we are very much looking into how to prevent people from joining ISIS, as this is highly important. It is a long-term goal of our Minerva effort (see http://www.ibtimes.com/isis-propaganda-researchers-track-neutralize-terrorists-twitter-recruitment-defense-1999625). I am also working on a new theory of “inhibiting” the diffusion of a message in social media, and was awarded a grant by the US Air Force to study this (see https://arizonadailyindependent.com/2015/04/02/asu-computer-scientist-investigates-viral-content/). The idea is to understand what “inhibits” a viral meme in social media, so that hopefully we can later engineer ways to stop the spread of extremist propaganda, including recruitment efforts.
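Dr. Shakarian’s actual models are not described in this interview. Purely as an illustrative sketch of the kind of question he raises, the classic independent-cascade diffusion model can be simulated with a hypothetical `inhibition` parameter that scales down the per-edge transmission probability, modeling a countermeasure that suppresses spread (the graph, probabilities, and `inhibition` knob here are all assumptions for illustration, not his method):

```python
import random

def independent_cascade(graph, seeds, p, inhibition=0.0, rng=None):
    """Simulate one run of the independent-cascade diffusion model.

    graph      : dict mapping node -> list of neighbor nodes
    seeds      : initially activated nodes (e.g., accounts posting a meme)
    p          : base probability that an active node activates a neighbor
    inhibition : factor in [0, 1]; the effective probability becomes
                 p * (1 - inhibition), a toy stand-in for a countermeasure
    Returns the set of all nodes activated by the end of the cascade.
    """
    rng = rng or random.Random()
    p_eff = p * (1.0 - inhibition)
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        newly_active = []
        for u in frontier:
            for v in graph.get(u, []):
                # Each active node gets one chance to activate each neighbor.
                if v not in active and rng.random() < p_eff:
                    active.add(v)
                    newly_active.append(v)
        frontier = newly_active
    return active

# Toy usage: average cascade size with and without inhibition.
chain = {1: [2], 2: [3], 3: [4], 4: []}
runs = 1000
rng = random.Random(0)
base = sum(len(independent_cascade(chain, {1}, 0.5, 0.0, rng)) for _ in range(runs)) / runs
damped = sum(len(independent_cascade(chain, {1}, 0.5, 0.5, rng)) for _ in range(runs)) / runs
```

In this sketch, raising `inhibition` shrinks the expected cascade size, which is the intuition behind engineering ways to stop a message from going viral.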

9. What do you think future terrorism and anti-terrorism will look like? What role will AI play in it?

I think the biggest area that will become important in terrorism in the future will be in cyberspace. Some terror groups have adopted hacking techniques already, but I think this is only the beginning. The potential for damage is large, and so this will become an attractive area for terror organizations in the future.

10. Besides AI, what new technology do you think can be used in anti-terrorism in the future? For example, virtual reality, augmented reality, drones, gene technology, neuroscience…

Going along with the above answer, I think cyber-security will play a large role in the future.

11. Elon Musk and Stephen Hawking think AI is dangerous to us, even more dangerous than nuclear weapons, and that it has the potential to destroy humankind. What do you think about that?

Artificial intelligence, like many other technologies, has the potential to be used for ill purposes and to have an adverse effect on humanity, but I don’t see how it has more potential to be harmful than research from a dozen other disciplines, such as physics, biology, etc. I think AI has been labelled as “dangerous” by some because there have been some recent advances lately, and this may have been unexpected to some people, especially those outside of the field. That said, such advances should always make us think about the long-term implications of science and technology, but I don’t see this as something unique to artificial intelligence.

12. Einstein once said that peace cannot be kept by force; it can only be achieved by understanding. What do you think is the most important thing for peace?

I think Einstein is right, and I think understanding populations across the world, their cultures and their needs, will ultimately lead to a more peaceful world. The good news is that technology can be a key enabler of such understanding, and can allow us to gain a better perspective on others.

Report Members: Wangwang, Rita | Editor: Synced


AI Technology & Industry Review — syncedreview.com | Newsletter: http://bit.ly/2IYL6Y2 | Share My Research http://bit.ly/2TrUPMI | Twitter: @Synced_Global