Polaris Assist: An AI helper for the good guys in the software wars

Taylor Armerding
Published in Nerd For Tech
6 min read · May 13, 2024


Perhaps you’ve been hearing, along with the rest of humanity, about all the malignant, dystopian things that can be and are being done with artificial intelligence (AI): everything from bias to cheating, deepfake porn, privacy violations, autonomous weapons, social engineering attacks, and yes, hackers using it to craft malware to get access to everything from your bank account to the apps on your phone, your car, and your smart home security system.

But all is not lost. As experts have been saying all along, the best way to fight bad guys using AI is with good guys using AI. And when it comes to defending software against AI-driven mischief, one of the new tools to do that is Polaris Assist, an AI-powered application security assistant on the Synopsys Software Integrity Group’s Polaris Software Integrity Platform®.

Disclosure: I write for Synopsys. But I’d write about this even if I didn’t, because the only way to defend against the speed and power of the criminal use of AI is, as the experts say, with the speed and power of ethical AI.

The Polaris platform has been around for more than five years. Its goal has always been to help software developers build secure, quality code faster through the integration of multiple software testing tools into a single platform.

A year ago, Synopsys launched a major upgrade of the platform to help developers cope with the escalation of those competing pressures — speed and security — by providing automated software testing tools aimed at securing proprietary, commercial, and open source code.

This latest upgrade combines large language model technology with the company’s existing application security knowledge and intelligence — robust coding patterns, vulnerability detection rules, and a vast open source knowledge base — to help security and development teams take their game to the next level. The AI benefits include:

  • Easily understood summaries of detected vulnerabilities, their potential risks, and remediation guidance in the context of the code that the developer is working on. In other words, it helps developers set priorities based on whether a defect is trivial or severe.
  • AI-generated code fix recommendations that developers can easily review and apply or adapt directly into their code. You’ve heard of spellcheck? This is a similar idea, designed to help correct mistakes in coding.

As Jason Schmitt, general manager of the Synopsys Software Integrity Group put it, “Our goal with Polaris Assist is to automate repetitive or time-consuming AppSec activities so developers can spend less time dealing with security issues and more time innovating.”

That goal mirrors what has been a mantra at security conferences for more than a decade: The best way to get developers to write secure code is to make the secure way the easy way. That means it won’t slow them down.

Indeed, the need for speed is obvious. If your organization doesn’t get your product to the market quickly, one or more of your competitors will.

Uphill battle

But speed is an increasingly heavy lift. Corey Hamilton, security solutions principal with the Synopsys Software Integrity Group, wrote in a recent blog post that “Despite the improvements brought by modern DevOps practices and application frameworks, [increasing development velocity] is an uphill battle due to an ever-growing list of applications that need to be maintained, conflicting requests for developers’ time, and a seemingly endless list of potential security threats.”

Not to mention that modern software is increasingly complex, which means the vulnerabilities in it are increasingly complex as well.

So the need for better software security should also be obvious. An unending stream of headlines documents catastrophic data breaches and ransomware attacks enabled by weaknesses in software — vulnerabilities that too many users fail to patch, even when a patch is available.

One example from last August was a software defect in a device from Japan-based Contec called SolarView, used by hundreds of solar farm operators to monitor power generation, storage, and distribution. That defect, a command injection vulnerability, can allow attackers to execute malicious commands. It had been public for more than a year and had received a severity rating of 9.8 on a scale of 10 from the National Vulnerability Database (NVD).
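Command injection is worth making concrete. The sketch below is a generic Python illustration of the bug class — it has nothing to do with SolarView’s actual firmware code — showing why interpolating untrusted input into a shell command is dangerous, and the standard mitigation: pass arguments as a list so no shell ever parses the input.

```python
import subprocess

def greet_vulnerable(name: str) -> str:
    # UNSAFE: the formatted string is handed to /bin/sh, which interprets
    # metacharacters. An input like "world; echo INJECTED" appends an
    # attacker-controlled second command.
    result = subprocess.run(f"echo Hello {name}", shell=True,
                            capture_output=True, text=True)
    return result.stdout

def greet_safe(name: str) -> str:
    # SAFER: arguments are passed as an argv list, so no shell is involved
    # and characters like ";" or "|" are treated as literal text.
    result = subprocess.run(["echo", "Hello", name],
                            capture_output=True, text=True)
    return result.stdout

payload = "world; echo INJECTED"
# In the vulnerable version, the shell executes the injected second command;
# in the safe version, the payload is printed as a single literal argument.
```

The same principle — never let untrusted input reach a command interpreter — is what parameterized queries do for SQL injection.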

According to Contec, a patch for the defect had been available for months. But multiple researchers using Shodan, a search engine for finding servers connected to the internet, reported that more than 600 SolarView devices were reachable on the open internet — the equivalent of leaving the doors to your house unlocked.

SolarView is not an outlier — vulnerabilities in software are rampant, in large measure because the pressure on developers for speed tends to trump everything else, including security.

The most recent annual “Open Source Security and Risk Analysis” report by the Synopsys Cybersecurity Research Center, based on analysis of anonymized data from 1,067 commercial codebases across 17 industries, found that 84% of the codebases assessed for risk had at least one open source vulnerability, and 74% had high-risk vulnerabilities, up from 48% the previous year. High-risk vulnerabilities are those that have been exploited, already have proof-of-concept exploits, or are classified as remote code execution vulnerabilities.
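The report’s definition of “high risk” amounts to a simple triage rule: a finding qualifies if it is known to have been exploited, has a public proof-of-concept exploit, or is a remote code execution flaw. A toy sketch of that rule (the field names here are illustrative, not from any Synopsys schema):

```python
def is_high_risk(vuln: dict) -> bool:
    # A finding is high risk if any of the report's three criteria hold:
    # actively exploited, public proof-of-concept, or remote code execution.
    return (vuln.get("exploited", False)
            or vuln.get("has_poc", False)
            or vuln.get("vuln_class") == "RCE")

# Hypothetical findings, for illustration only.
findings = [
    {"id": "CVE-A", "exploited": True},
    {"id": "CVE-B", "vuln_class": "RCE"},
    {"id": "CVE-C", "vuln_class": "info-leak"},
]
high = [v["id"] for v in findings if is_high_risk(v)]
```

In practice, a rule like this is what lets a scanner surface the 74% that demand immediate attention instead of burying developers in an undifferentiated list.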

The goal of tools like Polaris Assist is to use AI to help make sure all the “doors” are locked before a software product goes public — without slowing development.

Automate, automate!

“Automating the configuration of threat detection systems, and other security measures based on the application’s specific needs and threat landscape not only speeds up the development process but also ensures that security is baked into the application throughout its development lifecycle,” said Debrup Ghosh, senior security solutions manager with the Synopsys Software Integrity Group.

Beth Linker, director of product management with the Synopsys Software Integrity Group, noted that this is the first use of Generative AI in the Polaris platform, so there are no specific metrics yet on how much faster it will make development.

“But we expect to see improvements,” Linker said. “We also expect that remediation will be a less dreaded task with these capabilities and that developers will find it easier, particularly when they are working with less familiar code — which might include new AI-generated code or modules that are new to them or that haven’t been touched in a while.”

Indeed, Hamilton noted in his blog that the research firm Gartner has reported that “organizations that automate their security activities experience an estimated 15% improvement in meeting both security and delivery targets.”

In short, while Polaris Assist is more evolution of the platform than revolution, it is a significant upgrade that will help lessen the tension between the competing needs for speed and security. Polaris will continue to do what it has been doing — helping developers deliver secure, quality code — but will make it easier and faster.

“Polaris Assist’s capabilities are intended to complement the existing capabilities of Polaris and will likely continue to evolve over time,” said Hamilton.

Verify, then trust

It’s important to note a caveat that applies to anything AI related — the “artificial” component. Being artificial doesn’t necessarily make it worse. In some ways it can be much better — artificial wood doesn’t rot like natural wood does. Artificial turf doesn’t get torn up during a football game like natural grass. And as has been demonstrated in numerous ways, AI can do repetitive, monotonous, time-consuming tasks much faster than humans, without ever needing a break.

But it is different. The fact that AI “intelligence” is based on the data it has been “fed” by imperfect humans means that it still needs some human oversight. So, to borrow from President Reagan’s “trust but verify” motto, developers shouldn’t trust AI recommendations until they verify them.

“Developers should never trust AI-suggested fixes enough to cut and paste them without review,” Linker said. “Not ours, not anyone else’s. As a developer, you own your code and should always make sure you understand it and that it does what you want it to do. We have tested and tuned our prompt generation to reduce hallucinations, but they are always a risk.”

The good news, however, is that those risks will diminish. It’s still early in the AI era. “Polaris AI issue summaries and fix suggestions are just the beginning; we’ll be integrating more capabilities into other products in the future,” Hamilton said.


I’m a security advocate at the Synopsys Software Integrity Group. I write mainly about software security, data security and privacy.