Gaurav (GP) Pal Of stackArmor On the Future of Artificial Intelligence
An Interview With David Leichner
Security and privacy concerns — Security and privacy remain a top concern in the AI industry. There are many AI security risks, including generated malware and data “poisoning,” or the corruption of data in the AI systems. Implementing cybersecurity risk management techniques can help mitigate some of these risks.
As a part of our series about the future of Artificial Intelligence, I had the pleasure of interviewing Gaurav “GP” Pal.
Gaurav “GP” Pal is CEO and founder of stackArmor. He is an award-winning Senior Business Leader with a successful track record of growing and managing a secure cloud solutions practice with over $100 million in revenue focused on U.S. federal, Department of Defense, non-profit and financial services clients.
Thank you so much for joining us in this interview series! Can you share with us the ‘backstory’ of how you decided to pursue this career path in AI?
I have a computer science and engineering background and have always been fascinated by finding new ways to solve problems with information technologies. My career started as a developer, and I became increasingly interested in turning data into insight and business action. In many ways, AI accelerates our ability to make decisions, gain insights and harness the power of computing, data and code into an almost perfect IT system. The potential of AI to help us solve so many different problems very quickly excites me!
What lessons can others learn from your story?
Staying current and abreast of technology trends is very important to remain relevant. However, knowing how to help customers and organizations take advantage of new and emerging technologies comes from experience, and applying that experience is a passion of mine.
Can you tell our readers about the most interesting projects you are working on now?
It’s been a busy year for stackArmor. We recently announced the formation of our AI Risk Management Center of Excellence (CoE), made up of leading federal technology experts, to provide further counsel in stackArmor’s work driving safe AI adoption across the public sector.
Notable CoE members include:
- Suzette Kent, former federal CIO
- Maria Roat, former deputy federal CIO
- Alan Thomas, former commissioner of the GSA Federal Acquisition Service
- Richard Spires, former U.S. Department of Homeland Security CIO
- Teresa Carlson, transformational industry executive with over 25 years of leadership
In addition, we recently announced our participation in NIST’s newly established U.S. AI Safety Institute Consortium (AISIC). Alongside the nation’s leading AI stakeholders, stackArmor will help develop science-based guidelines and standards for AI measurement and policy. As part of this work, AISIC will develop benchmarks for identifying and evaluating AI capabilities, with a focus on capabilities that could potentially cause harm. Adopting AI in a safe and secure manner has been a challenge for public sector agencies because of evolving guidance, shifting risk standards and a shortage of resources. It’s a privilege to be able to help move AISIC’s mission forward to better serve the federal government and the public.
Finally, we’ve also recently been involved in several initiatives across the education and medical sectors, including supporting research by the National Institutes of Health (NIH)’s Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) program, and assisting the University of Utah School of Medicine with successfully obtaining a FISMA Moderate ATO for the National Emergency Medical Services Information System (NEMSIS). Both efforts are driving the secure adoption of AI across the public sector and ensuring mission-critical outcomes for important public safety initiatives.
None of us are able to achieve success without some help along the way. Is there a particular person who you are grateful towards who helped get you to where you are? Can you share a story about that?
I am grateful to Teresa Carlson, a member of stackArmor’s AI Risk Management Center of Excellence, who has been an inspiring business leader first at Microsoft, then at AWS and Splunk. She has been extremely supportive of partners and entrepreneurs looking to build innovative solutions. She helped kick off many programs that my company directly benefited from, and above all, I am grateful for her kindness and for always making time for partners, big or small.
What are the 5 things that most excite you about the AI industry? Why?
1. Potential to Innovate — Without a doubt, innovation is the most exciting aspect of the industry right now. AI seems to have endless opportunities, and many companies are at the forefront of exploring this potential in creative and exciting ways. When implemented safely and efficiently, AI has the power to transform the way our society operates.
2. Improvement of Service Delivery — By reducing repetitive tasks and automating processes, AI allows for the improvement of service delivery in both the private and public sector. The end user or citizen greatly benefits from streamlined processes and faster delivery of services.
3. Furthering the U.S. Federal Government’s Mission — I’m especially excited about the opportunities AI will bring to the public sector. We’ve seen over 700 AI use cases across the federal government so far, including in healthcare, transportation, the environment and benefits delivery. More efficient government processes benefit the American public.
4. Leadership — While AI is not a new concept, its explosion in popularity over the most recent year creates an opportunity for many in this space to be leaders and create the standards and guidelines we want to see in the industry. Those at the forefront of the AI industry should work together to reach our common goal of accelerating safe AI adoption.
5. Efficiency and Productivity — Industries with talent gaps and staffing constraints can reap the benefits of AI by allowing people to spend less time on manual processes and more time thinking creatively and focusing on mission outcomes.
What are the 5 things that concern you about the AI industry? Why?
1. Security and privacy concerns — Security and privacy remain a top concern in the AI industry. There are many AI security risks, including generated malware and data “poisoning,” or the corruption of data in the AI systems. Implementing cybersecurity risk management techniques can help mitigate some of these risks.
2. The need for an actionable framework — While the U.S. federal government and its industry partners are beginning to shape and enforce guidance around AI, there is still a need for an actionable framework for agencies and organizations to follow. As that framework continues to evolve, organizations should look to existing, applicable frameworks such as the Authority to Operate (ATO), which can help accelerate the adoption of AI in compliance with governance models and industry requirements.
3. AI workforce skills gap — There is a dire need for qualified AI professionals, including data scientists, data analysts and, further up the ladder, the newly established role of Chief AI Officer. Much like there is a need for skilled cybersecurity professionals, there is a growing need for skilled AI professionals.
4. Regulations — Organizations across regulated and public-facing industries, like those in healthcare, government, and financial services, must meet benchmarks and have auditable solutions to protect their consumers. A lack of regulatory guidance makes it difficult for these industries to implement AI solutions effectively.
5. Bias and ethics — Many are rightfully concerned about the inherent bias that AI brings. Because AI systems are trained on data from across the internet and from human input, there is the potential for human bias to enter these systems, creating ethical concerns. Thankfully, the U.S. federal government and many other organizations are working to address this issue through initiatives like NIST’s AI Safety Institute Consortium.
As you know, there is an ongoing debate between prominent scientists (personified as a debate between Elon Musk and Mark Zuckerberg) about whether advanced AI poses an existential danger to humanity. What is your position on this?
Artificial intelligence provides many benefits, and the White House has recognized its potential with the issuance of the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. With this executive order and many other federal initiatives, we’ve also seen that AI has the potential to bring forth new job opportunities. The EO explicitly directs federal agencies to create the role of Chief AI Officer, which I expect to become a trend among large enterprise companies as well. With the advancement of AI, many organizations are seeking data scientists and other security support to further safe AI adoption.
However, there is a caveat: while AI brings many potential benefits, it can also bring harm if not used responsibly. Those who warn of the dangers of AI are most likely concerned with its potential to cause chaos and disruption, which can be mitigated with security best practices and protocols. For example, NIST recently released a publication warning of privacy and security challenges arising from rapid AI deployment. The publication reinforces the argument that security needs to be the first priority for those implementing AI.
What can be done to prevent such concerns from materializing? And what can be done to assure the public that there is nothing to be concerned about?
To assure the public that AI adoption is positive, organizations implementing the technology must do so in a secure and safe manner. The federal government has been tackling this issue through recent guidance. As a follow-up to the AI Executive Order, the White House released a fact sheet announcing key actions taken three months after its original announcement. One key area of progress is managing risks to safety and security, most notably the completed risk assessments covering AI’s use across critical infrastructure sectors.
Everyone has a role to play in keeping AI safe to implement, from the federal government, to platform providers, and engineers building and deploying solutions. As such, all stakeholders have a greater responsibility to ensure safe and secure applications. This can be done by having an explicit Authority to Operate (ATO) safety & security governance model that ensures an auditable series of steps have been taken to deliver safe solutions.
Through my work at stackArmor, we have committed to developing solutions that enhance and adapt existing cyber risk management frameworks to mitigate some of the possible challenges related to AI. By addressing safety, bias and explainability, we are seeking to create actionable guidance to assess AI systems and effectively use this technology as a benefit without all the added risks.
As you know, there are not that many women in your industry. Can you advise what is needed to engage more women into the AI industry?
I’ve had the privilege of working with many outstanding and exemplary women through the course of my career, with three admirable women as core members of stackArmor’s AI Risk Management CoE: Suzette Kent, Maria Roat and Teresa Carlson. My advice to other executives in the industry is to uplift the voice of women experts in your circle and work with those within your network to see where you can involve women in leadership positions. To women looking to break into the AI industry, there are plenty of emerging opportunities to get involved, especially with a career within the federal government and federal technology community.
What is your favorite “Life Lesson Quote”? Can you share a story of how that had relevance to your own life?
My favorite life lesson quote is, “Hard on the problem, soft on people.” Being a business leader is all about being great with people: motivating them, supporting them when things go wrong and celebrating their successes. Building a people-friendly organization, especially a high-value consulting organization, is critical. I have been lucky to have had supportive bosses who nurtured my growth, coached me when something went wrong and, above all, provided a runway that allowed me to grow as a person and a professional.
How have you used your success to bring goodness to the world? Can you share a story?
I have a strong passion for technology and how it can do good for humanity. I got involved in FIRST LEGO League (FLL) and the FIRST organization, which is committed to fostering the development of the next generation of technologists. Mentoring middle-school and high-school students, showing them the art of the possible with technology and seeing them apply problem-solving skills in a competitive environment is amazing. I am very happy to sponsor and support the next generation by volunteering for such efforts.
You are a person of great influence. If you could start a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. :-)
In many parts of the world, the girl child is often neglected and does not get the same attention as a boy might get. Educating young girls and making sure they have access to education and technology is something I care about deeply. A movement around this would help us solve so many social challenges.
How can our readers further follow your work online?
Readers can follow stackArmor’s work through our website. Our blog has everything from our latest company news to deeper dives into industry trends and important government technology initiatives.
This was very inspiring. Thank you so much for joining us!
About The Interviewer: David Leichner is a veteran of the Israeli high-tech industry with significant experience in the areas of cyber and security, enterprise software and communications. At Cybellum, a leading provider of Product Security Lifecycle Management, David is responsible for creating and executing the marketing strategy and managing the global marketing team that forms the foundation for Cybellum’s product and market penetration. Prior to Cybellum, David was CMO at SQream and VP Sales and Marketing at endpoint protection vendor, Cynet. David is the Chairman of the Friends of Israel and Member of the Board of Trustees of the Jerusalem Technology College. He holds a BA in Information Systems Management and an MBA in International Business from the City University of New York.