By Sridhar Rajagopalan and Michael Endler
Most of us know bots can be bad. We hear them mentioned when a distributed denial-of-service attack knocks a major website offline, for example, or when bad actors try to covertly manipulate the prices of products or stocks.
Fewer of us know that good bots power many of the digital experiences we enjoy, quietly connecting the components that deliver most modern applications.
Business today relies on an enormously complex IT landscape. An application may rely on a variety of digital assets hosted in a variety of locations. Some of these assets may be managed by the company producing the application, some may be managed by partners or service providers, some may reside in on-premises legacy systems, some may reside in private or public clouds, and so on.
Bots are indispensable to this process. This heterogeneous mix of systems is typically composed into useful end-user applications and experiences via application programming interfaces, or APIs, which automate the connections. To carry out their work, bots make API calls to different systems. In other words, modern software development paradigms rely on bots to ensure that functions and data surface when and how they are supposed to so that an application or digital experience can run.
When a business makes automation easier for good bots, it also generally runs the risk of making it easier for bad bots too. How can enterprises enjoy the benefits of good bots while stymying the bad ones? Here are three tips that have, in the experience of Google Cloud’s Apigee team, proved effective in striking this balance:
Establish software development and deployment processes that minimize the potential for human error or sloppiness.
Many security vulnerabilities emerge not because of deliberate neglect or nefarious intent but because of simple human error. For example, if a business uses APIs to connect systems and make digital assets available to developers, it can use an API management layer to apply a common set of security precautions, such as encryption and authentication, without relying on individual developers to implement such measures themselves. The better a company's development and deployment processes are managed, the fewer opportunities that company affords to attackers.
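To make the idea concrete, here is a minimal sketch of a gateway-style check in Python: every request passes through one shared authentication function instead of each endpoint re-implementing its own. The names (`API_KEYS`, `gateway`, `list_orders`) and the in-memory key store are illustrative assumptions, not a specific product's API; a real deployment would use a managed credential service and a full gateway such as an API management platform.

```python
import hmac
import hashlib

# Hypothetical shared secret store; in practice this would be a managed
# credential service, not an in-memory dict.
API_KEYS = {"client-42": "s3cr3t-key"}

def is_authenticated(client_id: str, signature: str, payload: bytes) -> bool:
    """Verify an HMAC signature centrally, so no endpoint handles raw secrets."""
    secret = API_KEYS.get(client_id)
    if secret is None:
        return False
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature)

def gateway(handler):
    """Wrap any endpoint handler with the common authentication policy."""
    def wrapped(client_id, signature, payload):
        if not is_authenticated(client_id, signature, payload):
            return 401, "unauthorized"
        return handler(payload)
    return wrapped

@gateway
def list_orders(payload: bytes):
    # The handler itself contains no security logic at all.
    return 200, "orders"
```

The point of the pattern is that the security decision lives in exactly one place: an individual developer who forgets to add a check cannot weaken the system, because the policy is applied before their code runs.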
Use good bots to target and stop bad ones.
Digital business operates at a pace and scale that manual processes cannot possibly match. When an application may receive millions of calls per minute, machines have to be used to mediate the transactions — which means that to stop bad bots, enterprises should deploy good bots. The good bots can detect suspicious behavior, such as an unexpectedly high number of API requests over a very short period or requests coming from a suspicious source, to distinguish legitimate users from attackers — and to block the latter.
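One simple behavior such a good bot can automate is the rate check described above: flag any caller whose request volume over a short window exceeds what a legitimate user could plausibly generate. The sketch below is an illustrative assumption, not a specific vendor feature; the window and threshold values are placeholders.

```python
from collections import deque

WINDOW_SECONDS = 60          # how far back to look (assumed value)
MAX_REQUESTS_PER_WINDOW = 1000  # plausible ceiling for a human user (assumed)

class RateScreen:
    """Track per-caller request timestamps and flag abnormal bursts."""

    def __init__(self):
        self.history = {}  # caller id -> deque of request timestamps

    def record(self, caller: str, now: float) -> bool:
        """Record one request; return True if the caller should be blocked."""
        q = self.history.setdefault(caller, deque())
        q.append(now)
        # Drop timestamps that have aged out of the sliding window.
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) > MAX_REQUESTS_PER_WINDOW
```

In production, a screen like this would run inside the API management layer and combine rate with other signals, such as request origin and credential reputation, before blocking anyone.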
Use machine learning to adapt to evolving threats.
Cybercriminals are rarely complacent, which means enterprises can't be either. Keeping pace with attackers is simply the cost of success. When bad bots use new techniques, enterprises can't assume that old algorithms, based on old attack vectors, will save them. Rather, businesses should take advantage of machine learning capabilities that allow them to detect changes in attacker behavior and adapt.
Bad bots may be a fact of life for modern businesses — but that doesn’t mean that successful attacks have to be. With the right approach to software development and governance, good bots to combat the bad bots, and machine learning to keep the good bots smarter than their opponents, organizations can enjoy the benefits of automation while protecting themselves from the dangers.
[Looking to learn more about API security? Get your copy of our recent eBook, Inside the API Product Mindset: Building and Managing Secure APIs.]