C-Suite Perspectives On AI: Pieter Danhieux Of Secure Code Warrior On Where to Use AI and Where to Rely Only on Humans

An Interview With Kieran Powell

--

Building a simple calculator is a lot different from navigating the compliant configuration of a payment gateway, so exercising discretion and resisting the temptation to outsource complex issues to AI tools is paramount.

As artificial intelligence (AI) continues to advance and integrate into various aspects of business, decision-makers at the highest levels face the complex task of determining where AI can be most effectively utilized and where the human touch remains irreplaceable. This series seeks to explore the nuanced decisions made by C-Suite executives regarding the implementation of AI in their operations. As part of this series, we had the pleasure of interviewing Pieter Danhieux.

Pieter Danhieux is the Co-Founder and CEO of Secure Code Warrior, a global cybersecurity company that makes software development better and more secure. In 2020, Pieter was recognized as a finalist in the Diversity Champion category at the SC Awards Europe and was awarded Editor’s Choice for Chief Executive Officer of the Year by Cyber Defense Magazine (CDM), the industry’s leading electronic information security magazine. In 2016, he was featured on Business Insider’s list of the Coolest Tech People in Australia and was awarded Cyber Security Professional of the Year (AISA, the Australian Information Security Association).

Before starting his own company, Pieter was a Principal Instructor for the SANS Institute, teaching military, government, and private organizations offensive techniques to target and assess organizations, systems, and individuals for security weaknesses. He was also a co-founder of NVISO.EU, a cybersecurity consulting company in Europe. Prior to that, Pieter worked at Ernst & Young and BAE Systems. He is also a Co-Founder of BruCON, one of the planet’s most awesome hacking conferences.

Thank you so much for your time! I know that you are a very busy person. Our readers would love to “get to know you” a bit better. Can you tell us a bit about your ‘backstory’ and how you got started?

Since the early nineties, I have been fascinated by how gadgets and electronics work. I used to drive my family crazy by pulling apart the family computer and putting it back together, and my mother came home more than once to discover yet another of her radios in pieces on the bench.

Before long, this obsession entered the software realm, and I began testing its limits and finding ways to break it. I never really stopped, and eventually forged a passion for cybersecurity that has propelled me from a 90s script kiddie to where I am today: a certified cyber geek with lots of responsibility, and a company I run with amazing people by my side.

Regarding the official elements, I’m a globally recognized security expert, with over 12 years of experience as a security consultant and 8 years as a Principal Instructor for SANS. I began my information security career early in life, and was one of the youngest people in Belgium to obtain the Certified Information Systems Security Professional (CISSP) certification. This inspired me to collect a whole range of cybersecurity certificates (CISA, GCFA, GCIH, GPEN, GWAP), and I’m proud to say I’m currently one of the select few people worldwide to hold the top certification of GIAC Security Expert (GSE). That journey continued until the day I became co-founder and CEO of a cyber start-up, where my skills morphed from commanding computer systems to leading a company with 230 employees around the world.

It has been said that our mistakes can be our greatest teachers. Can you share a story about the funniest mistake you made when you were first starting? Can you tell us what lesson you learned from that?

As a co-founder of a fast-growing global start-up based in Australia, you need to learn incredibly fast about various business aspects you’ve never dealt with before. I was (and am) a technical geek, but I dropped out of my Master’s degree at university and failed my accounting classes in my Bachelor’s.

As a technical CEO, you need to upskill yourself in sales and marketing, finance, and employee culture at an incredible speed. At the same time, you also learn quickly that “cash is king, and cashflow is queen”. I knew I needed boots on the ground in the USA, our biggest target market, but I couldn’t afford great salespeople. So, I hired a part-timer on commission-only terms: a senior pilot who flew commercial airplanes for JetBlue, managed to day-trade stocks while flying (as apparently planes fly themselves nowadays), and had a side gig selling real estate in Florida. And now he was our first Enterprise Salesperson in the US, and it didn’t cost me a cent.

What did I learn six months later? You pay peanuts, you get monkeys.

Are you working on any exciting new projects now? How do you think that will help people?

We are always working on exciting new developments, and we’ll be revealing something quite game-changing very soon. I am sworn to secrecy, but I’d suggest checking in with us later in the year.

Thank you for that. Let’s now shift to the central focus of our discussion. In your experience, what have been the most challenging aspects of integrating AI into your business operations, and how have you balanced these with the need to preserve human-centric roles?

What we are seeing at the moment is possibly the most public-facing, and certainly the most publicly accessible form of artificial intelligence to date, and as such, the hype surrounding it is palpable. It can be quite difficult to separate fact from speculation, and naturally many companies with AI as core to their product offering will have a vested interest in marketing it as a miracle solution.

We have seen some productivity gains in areas like reporting from the use of AI technology, and overall it has been interesting to experiment with in different areas of the business. However, it is a “companion”-style tool rather than a direct replacement for an experienced person in the same role. Companies that have rushed to shed staff in favor of AI replacement have likely found themselves in a predicament as they worked to separate errors from perceived efficiencies.

For developers, AI coding tools are the flavor of the month, and they can assist productivity, but only if utilized safely. The most challenging aspects of integrating these AI tools into business operations will be determining which outputs are trustworthy, which tools you and your company can trust to become part of your tech stack, what the strengths and weaknesses of each tool are, and how you can ensure results are consistent if everyone is using a different tool or process.

At Secure Code Warrior, we’re exploring the most effective ways to test each AI engine, as well as the most effective ways to analyze when developers can be trusted with responsible AI use, based on their level of security awareness, critical eye, and insight into their project as a whole.

Can you share a specific instance where AI initially seemed like the optimal solution but ultimately proved less effective than human intervention? What did this experience teach you about the limitations of AI in your field?

Our initial experimentation with AI showcased its limitations quite plainly, and subsequent tests have revealed that similar contextual issues remain despite technological upgrades.

We asked a leading LLM to provide code for a simple login routine, which it did within minutes following a descriptive and technically accurate prompt. What followed was a perfectly usable code block, which a developer could implement into a project and gain the intended functionality.

This sounds great, but the moment it was assessed with security in mind, it was found to be not just inadequate, but potentially very dangerous if introduced into a project. Even prompting for a fix took far too many attempts, and it’s far too easy for the LLM — with no contextual knowledge of the overall project, nor critical thinking into how the code will be implemented — to make grave errors that result in exploitable vulnerabilities.
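To make that concrete, below is a minimal, hypothetical Python sketch of the pattern at play. It is illustrative only; the function names, schema, and language are invented for this example, not the actual output from the test described above. The naive routine “works” exactly as requested, yet is open to SQL injection and stores plaintext passwords; the hardened version is what a security-aware reviewer would insist on.

```python
import hashlib
import hmac
import os
import sqlite3

# The kind of routine an LLM happily produces: functional, but dangerous.
# (Hypothetical illustration, not actual model output.)
def login_naive(conn, username, password):
    # VULNERABLE: string-built SQL invites injection (e.g. a username of
    # "alice' --"), and passwords are stored and compared in plaintext.
    query = f"SELECT 1 FROM users WHERE name = '{username}' AND password = '{password}'"
    return conn.execute(query).fetchone() is not None

# What a security-aware reviewer would insist on instead.
def hash_password(password, salt=None):
    # Salted PBKDF2, so stored credentials are not recoverable plaintext.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def login_hardened(conn, username, password):
    # Parameterized query: user input never becomes SQL syntax.
    row = conn.execute(
        "SELECT salt, pw_hash FROM users WHERE name = ?", (username,)
    ).fetchone()
    if row is None:
        return False
    salt, stored = row
    _, candidate = hash_password(password, salt)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, stored)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT, salt BLOB, pw_hash BLOB)")
    salt, digest = hash_password("s3cret")
    conn.execute("INSERT INTO users VALUES (?, ?, ?, ?)", ("alice", "s3cret", salt, digest))
    print(login_naive(conn, "alice' --", "wrong"))     # True: injection bypasses the check
    print(login_hardened(conn, "alice' --", "wrong"))  # False
    print(login_hardened(conn, "alice", "s3cret"))     # True
```

Both versions satisfy the functional requirement of the prompt; only a reviewer looking through a security lens catches the difference.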

It became abundantly clear that while AI coding tools can provide something of a productive “pair programming” experience, their output must be assessed and overseen by security-aware developers. And that’s something we help achieve within development teams.

How do you navigate the ethical implications of implementing AI in your company, especially concerning potential job displacement and ensuring ethical AI usage?

Will AI entirely replace developers for security-related tasks? No, and we try to reinforce this mindset with development teams and organizations. Job displacement should not be a concern unless developers aren’t making any effort to advance their own skill sets or learn how to leverage AI effectively and responsibly.

We see the mastery of AI in secure software development becoming a valuable business asset, so this should leave developers with a desire to learn more. The “average” developer handling tasks that may eventually be automated with AI should be encouraged to upskill and take on more secure software architecture/design leadership, while bringing in AI to handle the more mundane tasks, with sufficient review for any hallucinations or AI-borne vulnerabilities, of course.

When it comes to ethical AI usage, we encourage developers and teams to view AI as a helping hand on quick fixes or as a pair programming partner, not as the foundation or crutch behind one’s development skills. Only humans can (and should) provide valuable oversight when dealing with areas such as compliance requirements for data and systems, design and business logic, and threat modeling practices for developer teams.

Could you describe a successful instance in your company where AI and human skills were synergistically combined to achieve a result that neither could have accomplished alone?

It is imperative for development teams to acquire foundational security skills to ensure code is protected from the start. That said, we’ve found that traditional upskilling efforts tend to fail because they are too rigid, are based on irrelevant information and context, and cannot keep up with today’s constantly shifting threat environment. Developer education must become tailored to the requirements of individuals, with techniques that address the latest vulnerability and attack trends.

That’s where the concept of agile learning enters the equation. Agile learning provides developers with multiple pathways to educate themselves. It focuses on “micro-burst” teaching sessions so teams learn, test and apply knowledge quickly and within the context of their real-life work. It adapts to different skill levels and training styles, while encouraging developers to immediately tie new lessons to real-life practices. We’ve found this approach provides teams with the best results: where these practices are implemented, we tend to see developer teams introducing fewer vulnerabilities into code and more frequently catching misconfigurations and code-level weaknesses, decreasing risk across the organization.

Based on your experience and success, what are the “5 Things To Keep in Mind When Deciding Where to Use AI and Where to Rely Only on Humans, and Why?” How have these 5 things impacted your work or your career?

1. Is the outcome I’m working towards more tactical or strategic?

Can it be automated or does it require critical thinking?

Building a simple calculator is a lot different from navigating the compliant configuration of a payment gateway, so exercising discretion and resisting the temptation to outsource complex issues to AI tools is paramount.

2. What stage in the project/development life cycle am I at? Do I need suggestions or a final product?

Do I need a jumping-off point to work with or do I need the most secure, polished result? If it is the latter, then this is the realm of a trained, security-aware human with experience.

AI is fine for nutting out some initial concepts, but everything must be fine-tuned and assessed for its suitability and security in the context of the overall project.

3. What is my experience level and to what degree will the AI tool be assisting me?

Will it be a partner/sanity check or am I relying too heavily on AI to do the heavy lifting — am I fully trusting its results without being security aware myself?

Put simply, if you have low security awareness and little applied skill in secure coding best practices, there is a chance you can do significant damage at a speed and rate of productivity not previously seen. Working with AI coding assistants should be restricted until a baseline of security skills is proven beyond doubt.

4. Does the language I’m working with have enough public data to reference in generating secure code, or will I need to have heightened awareness of the results because I am using a less popular language?

LLMs are only as good as their training data, and there is a huge margin for error. If you are working in an obscure language then, much as when asking a question on Stack Overflow, there will be less information available and fewer developers able to assist.

For example, if you’re working for a financial institution with a lot of legacy systems, then it stands to reason you’ll be working with a lot of legacy code. Many financial institutions use an ancient programming language called COBOL, which has been used since the 1950s. There are very, very few experts around in 2024 to navigate COBOL issues, and despite its age, it is still susceptible to modern vulnerabilities. Don’t expect an LLM to be any better; blindly trusting output is a recipe for disaster.

5. Do I trust this result is secure?

The answer should be no, as humans should handle big-picture tasks such as identifying and enforcing security best practices.

AI coding tools represent the future of software development, but they are likely to always be assistive in nature. Experienced, security-aware developers with honed problem-solving skills will be in demand, and more productive as the technology advances, but “set and forget” software builds would only be attempted by those who don’t care about quality or security.
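One way teams can put that default distrust into practice is to route AI-generated code through an automated screen before it ever reaches a human reviewer. The sketch below is a deliberately toy illustration: the patterns and function names are invented for this example, and a real pipeline would pair a proper static analysis tool with a security-aware reviewer rather than rely on a regex denylist.

```python
import re

# Toy denylist of patterns that should send AI-generated code to a human
# security reviewer. Illustrative only; these few regexes are invented for
# this sketch and are no substitute for real static analysis.
RISKY_PATTERNS = {
    r"\beval\(": "dynamic evaluation of strings",
    r"\bexec\(": "dynamic execution of strings",
    r"shell\s*=\s*True": "subprocess call with shell=True",
    r"f[\"'](SELECT|INSERT|UPDATE|DELETE)": "string-built SQL (injection risk)",
    r"verify\s*=\s*False": "TLS certificate verification disabled",
    r"pickle\.loads?\(": "deserializing potentially untrusted data",
}

def flag_for_review(source: str) -> list[str]:
    """Return human-readable reasons this code needs a security reviewer."""
    findings = []
    for pattern, reason in RISKY_PATTERNS.items():
        for match in re.finditer(pattern, source, re.IGNORECASE):
            line_no = source.count("\n", 0, match.start()) + 1
            findings.append(f"line {line_no}: {reason}")
    return findings

if __name__ == "__main__":
    ai_generated = "cur.execute(f\"SELECT * FROM users WHERE name = '{name}'\")"
    for finding in flag_for_review(ai_generated):
        print(finding)  # line 1: string-built SQL (injection risk)
```

The point is not that the checker is smart; it is that “secure” is a claim a human signs off on, with tooling merely raising the flags.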

Looking towards the future, in which areas of your business do you foresee AI making the most significant impact, and conversely, in which areas do you believe a human touch will remain indispensable?

When I think about how the role of developers will change with AI, and how developers are currently using it to generate code, I ultimately believe that, over time, developers will write less code themselves and become more focused on software architecture. Over the next year or so, developers considered “average” or below will realize that they risk being replaced unless they work on refining and upskilling. They will need to start assessing their situation in terms of skills and learning opportunities.

If I were a developer right now, I would learn about the things that AI cannot do, or is weakened by. That understanding brings a level of impact to an organization that is invaluable. AI tools can write average code, but the technology has weaknesses in terms of performance, security and privacy. The role of the “average” developer is entering its pilot stage: to stay relevant in the future of AI, developers will need to learn to work with AI tools, understand their inherent weaknesses, and navigate them as the master “pilot” who ultimately makes decisions with a contextual understanding of how components are used. AI remediation itself is also in its pilot stages; that is, in addition to the tools that help write code, there are tools for code analysis that discover vulnerabilities within code.

Just as AI has weaknesses in generating code, we have yet to see AI that can catch every vulnerability or weakness within a codebase. We cannot blindly trust its output, with hallucinations and false results still a leading concern when implementing its recommendations. Deciphering security best practices and spotting poor coding patterns, the type that can lead to exploitation, has emerged as a skill that developers must prioritize and that companies must invest in at the enterprise level. We cannot replace the critical “human touch” perspective, which anticipates and defends against increasingly sophisticated attack techniques.

You are a person of great influence. If you could start a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. :-)

In the context of the wider software industry, virtually every human on earth benefits from safer, higher-quality software and from having their sensitive data protected.

However, I have a great passion for environmental causes, and one day, I would like to work on solving climate and environmental issues through technology. I explored this with solar-powered koala drinking stations during Australia’s horrific Black Summer bushfires in 2019, and it’s an area of the tech space that is exciting yet frequently overlooked in terms of funding and attention. I’d like to be involved in turning the tables on that sentiment in the future.

How can our readers further follow your work online?

This was very inspiring. Thank you so much for joining us!

About The Interviewer: Kieran Powell is the EVP of Channel V Media, a New York City Public Relations agency with a global network of agency partners in over 30 countries. Kieran has advised more than 150 companies in the Technology, B2B, Retail and Financial sectors. Prior to taking over business operations at Channel V Media, Kieran held roles at Merrill Lynch, PwC and Ernst & Young. Get in touch with Kieran to discuss how marketing and public relations can be leveraged to achieve concrete business goals.
