The Cyber Threats in a Chatbot

Avi W
Published in Chatbot.com Blog
4 min read · Jan 13, 2017

With the big hype around the buzzword “Chatbot” comes a big responsibility to keep the bot safe from attackers.

Cyber-attacks are all around us, and keeping up with them is hard, which makes defending against them even harder. When developing a Chatbot it is important to take the potential security threats into consideration. Here are some attacks that can be used against a Chatbot:

Injections — Ever heard of an SQL injection? If you haven’t, it is a type of attack that injects commands into a database query and pulls information outside of the query’s intended scope. The same attack works against a Chatbot. If you are developing a Chatbot that uses a database, don’t be lazy: give the bot a database user restricted to the relevant resources, run every query through prepared statements, and escape any input that cannot be parameterized.
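As a minimal sketch of the prepared-statement advice (assuming a Python bot with an sqlite3 backend; the table and column names here are made up for illustration), a parameterized query keeps user input out of the SQL itself:

```python
import sqlite3

# In-memory demo database standing in for the chatbot's real backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (tracking_id TEXT, status TEXT)")
conn.execute("INSERT INTO orders VALUES ('TRK123', 'shipped')")

def lookup_status(user_input):
    # Parameterized query: the driver treats user_input strictly as data,
    # never as SQL, so injection payloads cannot change the query's scope.
    row = conn.execute(
        "SELECT status FROM orders WHERE tracking_id = ?", (user_input,)
    ).fetchone()
    return row[0] if row else None

print(lookup_status("TRK123"))       # shipped
print(lookup_status("' OR '1'='1"))  # None — the payload matches nothing
```

With string concatenation instead of the `?` placeholder, the second call would have returned every row in the table; the placeholder is what makes the classic `' OR '1'='1` payload harmless.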

XSS — Believe it or not, an XSS attack can hit a Chatbot too. A typical example is input the bot takes and saves in the database to be output later. This is something you should make sure you block: escape all special characters on output, and if possible add a filtering layer for extra defense.
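The escape-on-output advice can be sketched like this (assuming a Python bot whose stored replies end up in an HTML page; `render_bot_message` is a hypothetical helper, not part of any framework):

```python
import html

def render_bot_message(stored_text):
    # Escape every HTML metacharacter (& < > " ') before the saved user
    # input reaches the page, so a stored payload renders as inert text.
    return "<div class='bot-msg'>" + html.escape(stored_text, quote=True) + "</div>"

payload = "<script>alert('xss')</script>"
print(render_bot_message(payload))
# The <script> tag comes out as &lt;script&gt;... and never executes.
```

The key point is that escaping happens at output time, on the way into the page, regardless of what was stored.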

Social engineering — Some Chatbots redirect the user to a live chat. This live chat can be a huge problem and can in fact lead to a serious breach of the organization’s network. Social engineering is a big threat, and no matter how much you train your employees, they may never reliably tell a malicious file from an innocent one. As an attacker, I can easily convince the person on the other side to open my “image” of a technical bug I found. It is a matter of seconds before the organization has a Trojan on one of its computers. The suggested practice is to make sure the live-chat team is not connected directly to the organization’s network, but is instead separated from it by multiple gateways.

Identity theft — How easy do you think it would be to trick a Chatbot? I bet very easy. Some Chatbots are developed to use a single identity key to access a whole set of services. For example: if I am a delivery company and I want to offer tracking information, I would ask for a tracking number. Now what if I also want to offer shipping modifications, such as changing the address or phone number? Surprisingly, many modern bots add no further verification step and simply trust the tracking ID provided. How can you stop this? Require proper login credentials. Yes, it is annoying and means extra development, but it is worth it in the long run.
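One way to sketch that extra verification step (all names here are hypothetical; in a real bot the code would be delivered out-of-band by email or SMS, not kept in a dict) is to let the tracking ID unlock read-only lookups while any modification demands a second factor:

```python
import hmac
import secrets

# Hypothetical in-memory store: tracking ID -> one-time code sent out-of-band.
pending_codes = {}

def request_address_change(tracking_id):
    # Reading tracking info needs only the ID; a write triggers a challenge.
    code = secrets.token_hex(3)  # in production: deliver via the account's email/SMS
    pending_codes[tracking_id] = code
    return code  # returned here only so the demo can complete the flow

def confirm_address_change(tracking_id, code, new_address):
    expected = pending_codes.pop(tracking_id, None)
    # Constant-time comparison avoids leaking the code through timing.
    if expected is not None and hmac.compare_digest(expected, code):
        print(f"address for {tracking_id} updated to {new_address}")
        return True
    return False
```

An attacker who has merely guessed or scraped a tracking number can still see a parcel’s status, but cannot redirect it without the second factor.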

Encryption — Make sure your Chatbot uses an encrypted channel when discussing important things. If you are providing a service that replies with or asks for sensitive information, make sure the channel is properly encrypted. Don’t leave your users vulnerable to a man-in-the-middle attack just because it was hard for you to generate a proper certificate.
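In Python, for instance, the standard library’s `ssl.create_default_context()` already enables certificate validation and hostname checking; the sketch below (function names are my own) simply shows what a strict client-side channel looks like and why those checks should never be switched off:

```python
import socket
import ssl

def make_strict_context():
    # The default context validates the server certificate chain and checks
    # the hostname; disabling either re-opens the man-in-the-middle hole.
    ctx = ssl.create_default_context()
    assert ctx.check_hostname and ctx.verify_mode == ssl.CERT_REQUIRED
    return ctx

def send_over_tls(host, payload, port=443):
    # Wrap a plain TCP socket in TLS; the handshake fails loudly if the
    # server presents an invalid or mismatched certificate.
    ctx = make_strict_context()
    with socket.create_connection((host, port), timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(payload)
```

The temptation to pass `verify=False` (or `ssl.CERT_NONE`) when a certificate is hard to provision is exactly the shortcut this section warns against.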

These are just a few of the many possible attacks. So the next time you develop or think of having a Chatbot, remember to take responsibility and patch it up; attackers will have no mercy.
