Protecting financial assets and sensitive information is a critical need for modern organizations, especially in today’s rapidly evolving digital landscape. FactSet, a global financial digital platform and enterprise solutions provider, places significant emphasis on the security of its products.
Artificial Intelligence (AI) dominates daily headlines. At FactSet, we’ve harnessed the power of AI, developing a strategic approach that safeguards how the technology is used while maximizing its benefits, including delivering exceptional client service.
Generative AI refers to artificial intelligence systems capable of creating anything that can be digitally represented, such as text, images, music, speech, and even complex data models. This technology presents immense potential to streamline operations, improve decision-making, enhance customer interactions, and refine risk management strategies within the world of financial services and beyond. FactSet recently debunked some AI myths. On the flip side, however, the same capabilities can be manipulated to amplify cyber threats, causing financial loss and reputational damage. Recognizing this dual nature of generative AI, FactSet is focusing on a comprehensive, layered security strategy to secure its operations.
FactSet is implementing policies and processes around generative AI usage that control what types of data may be sent, what is approved for responses, and how access, logging, and governance are handled. FactSet does not use AI tools that lack their own enterprise-ready policies, including guarantees to keep our data private rather than using it to train future models.
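To illustrate what such controls can look like in practice, here is a minimal sketch of a policy gate that screens prompts before they reach a model and writes every decision to an audit log. The patterns, user names, and logger below are assumptions made for illustration, not FactSet's actual implementation.

```python
import logging
import re
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("genai.audit")  # hypothetical audit logger

# Example patterns for data classes a policy might block from outbound prompts.
BLOCKED_PATTERNS = {
    "client_account": re.compile(r"\bACCT-\d{8}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def check_prompt(user: str, prompt: str) -> bool:
    """Return True if the prompt passes policy; log the decision either way."""
    for label, pattern in BLOCKED_PATTERNS.items():
        if pattern.search(prompt):
            audit_log.info(
                "%s DENY user=%s reason=%s",
                datetime.now(timezone.utc).isoformat(), user, label,
            )
            return False
    audit_log.info(
        "%s ALLOW user=%s chars=%d",
        datetime.now(timezone.utc).isoformat(), user, len(prompt),
    )
    return True

# Usage: only prompts that pass the gate are forwarded onward.
prompt = "Summarize today's market movers"
if check_prompt("jdoe", prompt):
    ...  # forward prompt to the approved LLM endpoint (omitted)
```

The point of the sketch is the shape of the control, not the patterns themselves: every request is evaluated against policy and leaves an audit record, which is what makes later governance reviews possible.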
Moreover, FactSet is building out best practices and training for employees. Many of our developers who work with large language models (LLMs), a common type of generative AI, have been introduced to the recently released OWASP Top 10 for LLM Applications; OWASP is a community-led nonprofit foundation that provides guidance on software security. With support teams we have discussed evolving threats, like how recorded audio or video is no longer as trustworthy a validation source as it has been historically. Many of the security tools FactSet uses to protect itself are being enhanced with machine learning and artificial intelligence. Our organization launched GenAI FridAIs that include educational sessions, a prompt of the week, and internal tool training, like our enterprise-safe interactive chat instance chat.factset.io (We Gave Our Employees Access to ChatGPT, Here’s What Happened). Each week thousands of employees interact with these resources.
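For a flavor of the risks the OWASP Top 10 for LLM Applications covers, the sketch below touches on two of its themes: treating model output as untrusted before rendering it, and screening user input for obvious prompt-injection phrases. The marker strings and function names are invented for illustration; real defenses are layered and far more sophisticated than a deny list.

```python
import html

def render_model_output(raw_output: str) -> str:
    """Escape LLM output before embedding it in an HTML page.

    Treats model output as untrusted input, never as trusted markup or code.
    """
    return html.escape(raw_output)

# A naive deny-list screen for blatant prompt-injection attempts.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard your system prompt",
)

def looks_like_injection(user_input: str) -> bool:
    """Flag inputs containing well-known injection phrasing (illustrative only)."""
    lowered = user_input.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)
```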
Furthermore, because data is the fuel of generative AI and FactSet owns a large body of trusted data, we are updating our data governance policies with this new technology in mind. This entails strict controls on data access, clear delineation of roles and responsibilities, regular audits to check compliance, and protocols to report incidents as soon as they happen. By laying such a robust foundation, FactSet limits the scope for data misuse during the collection, processing, and storage stages.
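One way to picture those controls working together is a role-based access check where every request, allowed or denied, lands in an audit trail that compliance reviews can later examine. The roles, datasets, and class below are hypothetical and exist only to make the idea concrete.

```python
from dataclasses import dataclass, field

# Hypothetical role-to-dataset grants; real policies would live in a managed store.
ROLE_GRANTS = {
    "analyst": {"market_data"},
    "data_steward": {"market_data", "client_holdings"},
}

@dataclass
class DataGovernor:
    audit_trail: list = field(default_factory=list)

    def access(self, user: str, role: str, dataset: str) -> bool:
        """Grant access only if the role covers the dataset; record every request."""
        allowed = dataset in ROLE_GRANTS.get(role, set())
        self.audit_trail.append(
            (user, role, dataset, "ALLOW" if allowed else "DENY")
        )
        return allowed

gov = DataGovernor()
assert gov.access("jdoe", "analyst", "market_data")
assert not gov.access("jdoe", "analyst", "client_holdings")
```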
In conclusion, FactSet’s strategy for securing generative AI is dynamic, multifaceted, and firmly rooted in an understanding of the threats that could arise. By weaving together strategic planning and state-of-the-art technology, the company is building a secure environment that not only fuels generative AI usage but also safeguards it against possible threats. In doing so, FactSet demonstrates how innovation and security can be effectively coupled to deliver impactful financial solutions without compromising safety and reliability.
Author: Christopher Hanks (Principal Cybersecurity Architect)
Editors: Gregory Levinsky (Marketing Content Specialist) & Josh Gaddy (VP, Director, Developer Advocacy)