Joseph Jones of Bosco Legal Services & Investigations On How AI is Making Embedded Devices More Vulnerable to Cyberattacks

An Interview With David Leichner

David Leichner, CMO at Cybellum
Authority Magazine
12 min read · Oct 19, 2023


AI can intentionally and unintentionally lie.

Data poisoning attacks involve manipulating the information an AI is exposed to. Untrue statements or facts can trick an AI into accepting them as true and executing plans based on that information. On Reddit, people have shared how they have taught AI devices to say, endorse, or spread morally objectionable theories.

As the world becomes increasingly connected, the use of embedded devices in everyday applications — from smart home systems to industrial control units — is skyrocketing. While this advancement offers unprecedented convenience and efficiency, it also presents a growing security challenge. With the integration of Artificial Intelligence into these devices, the complexity and the potential vulnerability increase manifold. Cybercriminals are exploiting the intersection of AI and embedded devices, posing serious risks to data privacy, system integrity, and even public safety. How are these technologies making embedded systems more susceptible to cyberattacks? What are the vulnerabilities that AI introduces? And most importantly, how can we mitigate these risks while still benefiting from the advancements AI brings? As a part of this series, we had the pleasure of interviewing Joseph Jones.

Joseph Jones is the President of Bosco Legal Services & Investigations, and he is a licensed and certified expert in investigations with over 2,000 hours of specialized training in conducting cyber investigations. At a young age, Joseph got his start working at his father’s private eye business, where he held every position in the company, from sweeping the parking lots to taking out the trash. After completing advanced degrees in Behavioral Studies and Psychology and obtaining certifications in cyber investigations, Joseph created the company’s Cyber Investigations Department. Today, Joseph leads a staff of former FBI, law enforcement, military intelligence, and other intelligence professionals, and he currently serves on the governing board of an agency that trains government operatives, including CIA and FBI personnel.

Thank you so much for joining us in this interview series! Before we dig in, our readers would like to get to know you. Can you tell us a bit about how you grew up?

I grew up in sunny California with a father who was a private investigator and who started his company in the dining room of our small family home. I had a unique childhood, growing up hearing his stories, which were exciting for any kid. The life of a private eye is so different from what you see on TV.

Is there a particular story that inspired you to pursue a career in this field? We’d love to hear it.

Interestingly enough, it was more of a practical thing. Even though I grew up around investigations, I didn’t really plan on becoming involved with it long term because I wanted to become a social worker. However, in my third year of college, my plans for pursuing that kind of work came to a halt. Around 2008 there was a recession, and there was a massive hiring freeze for the kinds of jobs I wanted. By that point I had met and married my now wife, and I had to think about supporting my growing family. So I shifted from viewing the family business as something I did in the summers to it becoming my long-term career. In the back of my mind, I knew that I wanted to forge my own path in this industry.

Can you share the most interesting story that happened to you since you began this fascinating career?

Truly, there are two cases that really opened my eyes and helped shape my view of the cyber world. Back in 2008, one of my first cases was a personal injury case where someone was suing the hospital, claiming that the doctor had damaged his leg so severely that he could barely walk. The defense had us doing surveillance on the guy, which wasn’t showing much. Surveillance is limited in many ways; a lot of the time we don’t get much, and most of it is watching someone get in and out of a car. I remember this case very clearly because social media was so new that no one had really thought about using it in their cases. Because of the guy’s age, I was pretty sure he didn’t have any online accounts, and once I dug in, I found that he didn’t. However, his wife and son-in-law did. I was able to get game-changing evidence showing that the guy was exaggerating his injuries, by showing him doing things he had claimed were impossible. In a handful of hours, we were able to develop better information than we could have from surveillance. It was at that moment that it became clear to me the cyber world was going to make a giant impact on the investigation field.

There was another interesting case from the early 2000s, back when the internet was relatively new, involving a widow and a romance scam. The case started with the lady reaching out to us after her attorney had become suspicious of her spending a significant amount of money on a man she had met online. It turned out she had been gifting computers, money, and all sorts of presents to this man. When I started to dig into the case, I found that, unfortunately, the man had fabricated his identity and was chatting with her all the way from Africa. When we revealed our findings to the lady, she was absolutely crushed and could hardly believe that the person she had been talking to was a fake. Interestingly enough, I found out that she decided to create her own fake account and catfished him right back. It was this case that really showed me that fake accounts were not only successful at scamming people out of their money, but that they were only going to get more sophisticated. Today, AI has become such a useful tool for generating believable fake accounts, and even for responding to messages, that it is becoming harder to distinguish a real person from an auto-generated one.

Are you working on any exciting new projects now? How do you think that will help people?

Cyberattacks, and subsequently cyber investigations, are so dang expensive. It is great that we are able to help people, but for every one person we can help, there are a hundred we cannot, primarily because of the cost. It takes a high level of expertise and a ton of man-hours to combat cyber-related attacks. One of the things we have done a fair amount of is creating online resources for victims of online harassment or cyberattacks, providing tools and guidance on what they can do short of hiring a professional. I am currently working on some really high-profile cyber-related cases, but I cannot talk about them publicly until they are resolved.

Ok, thank you. Let’s now move on to our main topic. Could you briefly explain to our readers the key reasons why the integration of AI into embedded devices might be increasing their vulnerability to cyberattacks?

Back in the day, all that was needed to have a safe home was to make sure the doors were locked at night, the windows were closed, and the stove was off. Nowadays, having a safe home also means making sure your home network isn’t susceptible to a cyberattack. Of course, your laptop has a firewall and your router comes equipped with defenses, but have you stopped to consider that your new smart lights have opened a gap in your defense? Embedded devices can help make life easier, save time, and improve efficiency, but many of these devices lack the robust cybersecurity settings that keep hackers away. Point blank, any device connected to the internet can be hacked. A hacker’s wish is to break in, and an embedded device, even more so an AI-enabled device, is their ticket to getting in and out before you have time to react.

Adding AI into this equation is a double-edged sword. While AI can help fill in the “smart” gaps of our current embedded devices, bringing everything together to align with our routines, such as preparing the coffee and opening the blinds, this collection of information is allowing hackers to gain more personal data than ever before. AI increases the risk of cyberattacks because there is more value in hacking your devices than ever before.

Can you elaborate on the specific types of cyberattacks or vulnerabilities that AI-enabled embedded devices are particularly susceptible to, perhaps providing a recent example or case study that caught your attention?

Traditional cyberattacks on embedded devices involve taking advantage of limited or nonexistent defenses, which can be corrected by developers. Cyberattacks on AI-enabled embedded devices are different in that hackers get in and take over by adapting the existing AI algorithms. With AI always changing and being so personalized, it is impossible for any developer to predict all the kinds of scenarios that could manipulate someone’s AI device. Something that has caught my eye is how Microsoft is expanding its use of AI into Outlook mail. This extension offers features such as email tracking and scheduling, predictive text, task management, and more. With emails regularly being phished, AI-embedded systems seem like a gateway to our inboxes. The incentive for hackers to trick your AI-enabled Outlook into letting them in is going to be something interesting to watch in the near future.

How do you believe the inherent nature of AI, with its data processing and learning capabilities, might be unintentionally expanding the threat surface for embedded devices? And how does this compare to traditional embedded systems without AI?

Hackers “poison” AI-enabled embedded technologies with false information, manipulating the model’s training dataset and its yes/no responses in ways that impact linked devices. Imagine one of your AI devices gets hacked and is programmed to turn on the kitchen oven whenever you ask it to start the coffee machine. At the moment, AI is making embedded systems more susceptible to cyberattacks because these devices are so eager to follow instructions that the instructions have been known to override their built-in safeguards.
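To make the poisoning idea concrete, here is a minimal, illustrative sketch of a label-flipping data-poisoning attack against a toy classifier. The synthetic dataset, the scikit-learn logistic regression model, and the 20% flip rate are assumptions chosen purely for illustration, not details from the interview; the point is simply that corrupting a slice of the training data degrades how the model behaves on honest inputs.

```python
# Illustrative sketch only: label-flipping data poisoning on a toy classifier.
# The dataset, model choice, and 20% poisoning rate are hypothetical examples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean training data standing in for the "truths" an AI system learns from.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: model trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy with clean training data:  ", clean_model.score(X_test, y_test))

# The attacker flips the labels of 20% of the training examples.
y_poisoned = y_train.copy()
flip_idx = rng.choice(len(y_poisoned), size=len(y_poisoned) // 5, replace=False)
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

# Same model, poisoned data: accuracy on honest test data drops.
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("accuracy after label-flip poisoning:", poisoned_model.score(X_test, y_test))
```

Real-world attacks on embedded AI are usually more subtle (targeted or clean-label poisoning rather than blunt label flipping), but the mechanism is the same: quietly corrupt what the model learns from, and its downstream decisions follow.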

Embedded devices with AI capabilities undoubtedly offer a range of advanced functionalities and benefits. How do you think organizations can strike a balance between harnessing these advantages and ensuring robust cybersecurity measures?

In addition to the AI-embedded devices in our homes, these devices are making a mark in larger organizations and companies. AI is definitely revolutionizing the customer service experience, healthcare, and banking. While once you could chat with a service representative to help you deal with your issue, now we are being guided to an answer via AI chatbots. With the intelligent and information-rich nature of AI, customers are able to receive quicker answers to their inquiries, only escalating to a human representative if need be. Businesses are certainly benefiting from the use of AI. The big disruption with the use of AI in customer service is, of course, the expectations of the customer: while once they had to wait for hours for a representative to provide an answer, now they must push buttons, answer AI-generated questions, and be guided to an answer that best fits their problem. A good balance between the need for, use of, and ability to incorporate AI tools like chatbots is keeping them on a tight leash. On one hand, it is important for AI to absorb and learn; on the other hand, it is still vulnerable enough that it can be corrupted and poisoned.

As Industry 4.0 and smart factories gain traction, how are strategies and approaches evolving to foster product security within the supply chain?

One major point is that many “smart factories” are growing so fast that networks are being built hastily, creating an opportunity for hackers to get in. A recommendation is for these AI-driven facilities to understand that the threat of cyberattacks is real, that ransomware is an active problem, and that they need to be prepared for the worst. The ecosystem that this “Fourth Industrial Revolution” creates will certainly be something to watch, as we see AI help bridge the gap between operations, safety, compliance, and quality control. At the moment, it appears that the industry is following the same steps other online businesses have taken, including cyber training for employees, installing firewalls, and encrypting data, in the hopes of preventing cyberattacks.

Here is the main question of our interview. What are your ‘Top 5 Things Everyone Should Know About the Vulnerabilities of AI-Enabled Embedded Devices’?

1. Any device connected to the internet can be hacked.

By applying common research techniques, such as reverse engineering, hackers can take advantage of flaws in any embedded device. The general rule in security is that everything connected to the internet can be hacked; the trick is making it so time-consuming that it is not worthwhile for hackers to do so.

2. There are limited ways to protect AI devices.

Unlike traditional embedded devices, where hackers rely on human error, AI devices can be hacked by manipulating the AI algorithms themselves.

3. AI can intentionally and unintentionally lie.

Data poisoning attacks involve manipulating the information an AI is exposed to. Untrue statements or facts can trick an AI into accepting them as true and executing plans based on that information. On Reddit, people have shared how they have taught AI devices to say, endorse, or spread morally objectionable theories.

4. AI-embedded devices are more attractive to hackers than your other devices.

Why mess with your lights when a hacker can jailbreak your AI device and trick it into divulging your billing details?

5. AI-embedded devices are young; there is still a lot of growing they need to do.

Companies are rapidly trying to mitigate the misuse of AI and training it to keep its safeguards up. The problem is that AI is meant to learn and be a personalized experience, and at this moment it can easily be tricked. Yet companies are jumping to incorporate it into their chatbots and other platforms. AI can be great, but it still needs to grow up.

Let’s take a minute to talk about the future. Given the rapid evolution of AI and its implementation in embedded systems, where do you see this vulnerability trajectory heading in the next 5–10 years? Are there any emerging technologies or practices that might mitigate these risks or potentially exacerbate them?

I am pretty optimistic about the future; in the next 5–10 years I think we will have a better grasp of the role that AI can play in our daily lives. When you look back, it took 5–10 years before social media ended up being a key component of daily life. While I am hopeful about the role AI-embedded devices will take in our future, I just want to remind people to take it slow. AI can be great in business and various other fields, but because the technology is so new, it is vulnerable to bad actors. For the time being, I encourage users to follow the steps we currently take to stay safe from cyberattacks: changing passwords, keeping networks encrypted, and updating algorithms so they learn to protect themselves from hackers.

You are a person of enormous influence. If you could inspire a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. :-)

Currently in the United States we have a foster system where many kids age out without many of the skills they need to have a successful life. If I could inspire a movement, it would be to encourage people to give foster kids a chance: internships, advice, skills, and jobs. I work with a law firm that represents the interests of abused children, and at the end of the day, we do a lot of these cases because of my passion for social work. As a social worker turned investigator, I take a lot of pride in being able to help these kids out.

How can our readers further follow your work online?

Check out our website blog for updates on our cases and for resources on what people can do if they have been victims of cyber-related crimes. We are more than happy to point them in the right direction.

This was very inspiring and informative. Thank you so much for the time you spent on this interview!

About The Interviewer: David Leichner is a veteran of the Israeli high-tech industry with significant experience in the areas of cyber and security, enterprise software and communications. At Cybellum, a leading provider of Product Security Lifecycle Management, David is responsible for creating and executing the marketing strategy and managing the global marketing team that forms the foundation for Cybellum’s product and market penetration. Prior to Cybellum, David was CMO at SQream and VP Sales and Marketing at endpoint protection vendor Cynet. David is a member of the Board of Trustees of the Jerusalem Technology College. He holds a BA in Information Systems Management and an MBA in International Business from the City University of New York.
