Welcome back to What’s NEXT, a podcast from Samsung NEXT exploring the future of technology. In this episode, I talk with Edgeworx’s Farah Papaioannou and Kilton Hopkins about how edge computing works, and how their technology makes it easier to run applications at the edge.
Ryan Lawler: All right, so to start, tell us: what is Edgeworx and what is it that you do?
Farah Papaioannou: We are an edge computing company, and our belief is that we're moving to a place where all of these devices that are becoming more and more intelligent, and generating more and more data, will ultimately become software platforms. Even the most innocuous devices, whether it be a refrigerator, your washing machine, a car, or an oil pump, will all become software platforms. And we're going to enable that future.
Ryan Lawler: So you are Edgeworx, you operate on the edge, so what is the edge? How do you define it and how is it different from the cloud or the fog, or all these other different terms people are coming up with?
Kilton Hopkins: In the early days of the internet and the World Wide Web, the edge had a different meaning. Back then the edge was the edge of the network, that ISP building, right?
This is the AT&T local branch office where all of the local computers would dial in and then be connected to the internet at large. That used to be called the edge. Now the definition of the edge is a little more like this: you have the cloud, or the internet center, then the edge of the cloud, and then you get into the physical world, and that is the edge proper.
Fog is part of the edge, but fog just means you don't really know where at the edge your workload is running. Maybe I want to run these analytics anywhere in the building. Okay, great. That's a great fog use case. Fog is basically cloud-ish stuff running in the physical world, down at ground level. That's why it's called fog, right?
But all of that is edge, and edge goes all the way down to the generation of the data. So it could be the fitness band you're wearing on your wrist.
Ryan Lawler: Right. So the way that you think about the edge is literally the final device that can be used to process some amount of information.
Kilton Hopkins: Yep, physical things. Things that are present in some determined location or geography. Whether or not they're the end of the chain is not so important; it's just that they're part of that physical makeup, that network of things that actually has a location.
Ryan Lawler: Okay cool. So you’re both interested in edge computing and IoT, what led you to that place? Where did the interest come from, or how did you get started?
Kilton Hopkins: Well, it's an absolute necessity.
When the Internet of Things began to rise, I had already spent a couple of years, from 2010 onward, looking at it, and then in 2014 edge computing was just self-evident to me. I came across it because I couldn't see any other way that we would possibly handle what we were trying to do with the Internet of Things, and then I realized it's so much more than the Internet of Things.
I mean, edge computing is essentially all the best stuff we've learned from the last 20 years, brought down to the rising amount of computing power that's on every device. For me, I came across it the way I came across the cloud in the early-to-mid 2000s, and the way I came across mobile in the late part of that first decade. It just came in front of me.
And I said, this just has to be the next thing to focus on.
Farah Papaioannou: I think it was a similar kind of experience for me, but I focus on it from a marketing standpoint as opposed to a tech standpoint. It just seemed like a no-brainer, the way that everything was becoming intelligent, everything was becoming smart. When I try to explain what I do to my mom, autonomous cars she understands, and I have a newborn who I don't believe will ever need a driver's license, because autonomous cars will be so ubiquitous.
So imagine a car comes to a four-way stop: how do you determine who goes first? These cars are now basically computers on wheels. If we had to take all the data they generate to try and solve this problem, and ship it back to some cloud in order to get some sort of processing and some sort of answer, there's no way that would work, right?
There are latency issues, bandwidth issues… Kilton likes to say that if every house in San Jose turned on a Nest camera all at once, it would break the internet. And I'm not talking about the way Kim Kardashian's butt did; I mean really break it and bring it crashing down. And even if you could solve those things, then there's cost. So it just seemed like a no-brainer that this is what's going to have to happen in order to enable these smart devices to really do what people want them to do, or think they can do.
Ryan Lawler: So what are the enabling factors that are converging to make edge computing not only possible but necessary, as you said? Let's talk about what that involves.
Kilton Hopkins: Well, one is the ever-decreasing cost of good-quality compute. The amount of computing power you can get for a few dollars now is incredible compared to what it was 10 or 20 years ago. Also power consumption: we can run a lot more powerful things off of batteries and off of solar. And then, first we learned how to work with data.
Data science was a very specialized field for a long time, and then we had an explosion. During the big data era we realized that there are all kinds of uses for data, if only we had enough of it. Given that we now know we can do great things with data, there's more demand for more data, and that began the internet of things scramble: get sensors out there so we can start learning, so we can see if maybe we could predict the failure of that machine or increase efficiency.
And as soon as you have all of this data being generated, you have the necessity of edge computing, because if you look at the amount that will be generated over the next five to ten years, it hits a point pretty rapidly at which we can't handle it with the internet flow that we have. Nor should we, because a lot of it needs to be processed very quickly, at latencies on the order of microseconds.
So all of that comes from the fact that we can work with data, and that came from the fact that we actually had some tools. Those are some of the trends that have put us squarely where we are today.
Ryan Lawler: So how does Edgeworx actually work, or how does the technology work? And what are you actually providing?
Kilton Hopkins: Every device that's out there, we give it what's called a piece of agent software. What that does is it sits on top of the operating system and takes over looking at what's running and what should be running. The software abstracts the hardware.
It then exposes a containerized runtime environment, allowing you to run any code that has been packaged. In addition, it provides some basic services. Think of how Amazon Web Services has S3 buckets, right? Everybody who works with AWS knows about S3, and it's how you do unstructured file storage.
I want to store a picture, I want to store whatever: you just use S3. Same thing for the edge. So in that software agent we provide a couple of, call them hooks, for building for the edge or running at the edge. One of those hooks is that all of your data just looks local. How we do that is we make a network between all of the devices that are running the agent and that have been authorized into your edge environment.
And so once that piece of agent software is installed, that compute pretty much becomes an edge software platform, and then you can manage it all the same way. They all look the same. That's the gist of how it works, in a nutshell.
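To make the agent idea more concrete, here is a hypothetical sketch of the "what's running versus what should be running" loop Kilton describes. All names, versions, and structures here are invented for illustration; this is not Edgeworx's actual API.

```python
# Illustrative reconciliation loop: compare the set of containerized apps a
# controller says should run against what is actually running, and emit the
# start/stop actions needed to close the gap. Names are hypothetical.

DESIRED = {"video-filter": "v1.2", "mqtt-bridge": "v2.0"}  # from the controller

def reconcile(running: dict) -> list:
    """Return (action, name, version) tuples that bring `running` to DESIRED."""
    actions = []
    for name, version in DESIRED.items():
        if running.get(name) != version:
            actions.append(("start", name, version))   # missing or wrong version
    for name, version in running.items():
        if name not in DESIRED:
            actions.append(("stop", name, version))    # no longer wanted
    return actions

# Example: one app is outdated, one is missing, one should be retired.
for action in reconcile({"video-filter": "v1.1", "old-logger": "v0.9"}):
    print(action)
```

Real agents also handle pulling images, health checks, and restarts, but the core "reconcile desired versus actual" loop is this simple.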
Farah Papaioannou: To take all that tech and repackage it into something a layperson can maybe understand: what we're trying to do for these devices is basically what Android did for mobile. Before Android came along, each one of these phones was its own software platform; hardware and software were all tied into one. BlackBerry, for example, owned their own applications and no one else could write for them. Whatever BlackBerry came out with, that's all you could use.
Well, Android came along and turned the phone from just a communication device into a software platform where people could write anything from Angry Birds, to weight-loss apps, to Quicken for phones, to Fortnite for my son, any sort of application they wanted to write. It really opened it up beyond just communication, and it also made it easy for all developers to write for the platform.
So the mobile device is now a software platform. It's basically an edge device that people carry around with them. What we can do is that same sort of thing, but expanded beyond mobile into, again, cars, oil pumps, refrigerators. Any device that has some minimum level of compute, you can turn into a software platform. You can think of us almost as Android for the edge.
Ryan Lawler: Okay. When you think about all of these use cases, a lot of what you've talked about is consumer facing, and what I'm wondering is, for those consumer-facing devices, how many of them actually want to be more open, to become more platform-like or able to run other applications? Is there actually a desire for that from the makers of these devices?
Farah Papaioannou: Absolutely. While you can use our software for consumer-facing devices, we're seeing most of our traction right now in industrial. We've got some telcos, advanced manufacturing, oil and gas, autonomous vehicles… The way to think about it is that they would like to enable their devices to become software platforms.
Ford, for example, would like to be able to do AI on their cars, and real-time image processing on their cars. Right now, the way they solve this is that Ford may have their own particular SDK that a handful of developers at Ford know how to write for. So maybe three developers can write against that SDK.
What would be better is if everyone could write for that SDK, because then Ford would have access to a plethora of apps. And you could take that same AI that you write for a car and write it for a smart camera, or for an oil pump, or for the factory floor, because the algorithm itself doesn't change; it's just the application of it.
One of the applications we currently have is a way to do model training at the edge. That's not specific to any one use case, and I guarantee everybody would like to be able to port that application if they could. It would be great for a developer to write that app once and run it on multiple different devices, right? The Angry Birds guy doesn't want to run his game on three different phones; he wants to run it on everybody's phone. That's where the developer gets scale as well.
There's actually a really nice marriage between the two in allowing us to open that up.
Kilton Hopkins: And the thing to also note here is that having our edge computing technology embedded in a device doesn't necessarily mean the use case is third-party apps or an open app platform.
It might just be that the primary use is that the manufacturer is able to deploy upgrades to their own product; they no longer have to build that infrastructure themselves. Or they want to manage the product remotely and see how many washing machines have come online, or how many oil pumps are currently not running the software they're supposed to be running and what to do about it. That's all stuff they'd have to figure out without us.
Ryan Lawler: Okay. When you talk about all these use cases, a few things come to mind for me, and those are limitations in compute, because a lot of these edge devices are running on pretty low-powered processors, and also connectivity. So tell me what you're seeing around both of those, how much compute is needed for your platform to run, and how you even get it onto these devices to begin with.
Kilton Hopkins: Sure. We keep a very low set of requirements. In fact, for us a compute platform is a target to run our technology if it can run arbitrary code and do basic multi-threading. In terms of what that looks like, think a dual-core 32-bit ARM processor, the type of processor that costs a couple of dollars, and 256 megs of RAM. That's a suitable platform for doing some decent edge computing. With that you can probably run about four edge applications of moderate size before you tap out.
Ideally, if you're going to run something that involves some AI or some data analytics, you want more like 512 megs of RAM, or a gig of RAM; a quad-core processor is great, and 64-bit is great. Those are about the specs of a Raspberry Pi 3: one gig of RAM and a quad-core 64-bit processor.
As for how you get it on there, you can either manufacture the device with its embedded OS and just have the binaries include the Edgeworx software, or you can put it on aftermarket, meaning the module is already done, maybe even already has an OS, and you install it afterward. There are a lot of flexible ways to put it there.
If you want to embed it with some security, you might prefer to do it at manufacture time, right? That way nobody has access to the root binaries. So that'd be one reason to flash it onto the board as it's being made.
Ryan Lawler: Okay. That makes sense. What about the connectivity piece?
Kilton Hopkins: All of our stuff was designed to handle spotty connectivity. That's one of the things that makes the edge so different from the cloud or the data center. Those environments have really reliable connectivity; in fact, if you lost connectivity from your servers to the rest of the system, you'd basically be out of a data center or out of a cloud node. At the edge that happens all the time. So our software does the handling and the buffering. It's called store and forward: it holds messages that couldn't get across because the cellular link was down, and so on.
The whole point of edge is to use the connectivity to get the instructions or the software you need, and then do what you have to do offline, not needing to be constantly connected in order to operate.
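As a rough illustration of the store-and-forward pattern just mentioned (not Edgeworx's actual implementation; the transport function and message names are made up), the core logic fits in a few lines:

```python
# Minimal store-and-forward sketch: messages that fail to send over a flaky
# link are buffered locally, in order, and flushed when the link comes back.
from collections import deque

class StoreAndForward:
    def __init__(self, send_over_link):
        self.send = send_over_link   # callable returning True on success
        self.buffer = deque()        # messages waiting for connectivity

    def publish(self, message):
        self.buffer.append(message)
        self.flush()

    def flush(self):
        # Drain in order; stop at the first failure so ordering is
        # preserved for the next retry.
        while self.buffer:
            if not self.send(self.buffer[0]):
                return
            self.buffer.popleft()

# Example: the link is down for the first two sends, then recovers.
link_up = {"value": False}
sent = []

def fake_link(msg):
    if link_up["value"]:
        sent.append(msg)
        return True
    return False

saf = StoreAndForward(fake_link)
saf.publish("reading-1")
saf.publish("reading-2")   # both buffered while the link is down
link_up["value"] = True
saf.flush()                # link restored: buffered messages go out in order
print(sent)                # → ['reading-1', 'reading-2']
```

A production version would add persistence to disk and retry scheduling, but the buffering idea is the same.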
You really want to choose hardware based on your use case. If you're going to do real-time image recognition at 20-plus frames per second, you're going to want a GPU or AI-specific chips right there at the edge. If all you really need to do is filter data based on some simple criteria and forward it on if it matches a pattern, that's the sort of thing you can run on that bare, bare minimum, right? Just some simple filtering intelligence.
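That bare-minimum filtering case can be sketched like this; the threshold, field names, and forwarding callback are all hypothetical:

```python
# Simple edge filtering: inspect readings locally and forward only the ones
# that match a pattern, so the uplink carries a fraction of the raw data.

THRESHOLD_C = 80.0  # illustrative cutoff: forward only hot temperature readings

def matches(reading: dict) -> bool:
    return (reading.get("sensor") == "temperature"
            and reading.get("value", 0) > THRESHOLD_C)

def filter_and_forward(readings, forward):
    for r in readings:
        if matches(r):
            forward(r)   # e.g. publish upstream; here just a callback

forwarded = []
filter_and_forward(
    [
        {"sensor": "temperature", "value": 72.5},
        {"sensor": "temperature", "value": 93.1},  # anomalous: forwarded
        {"sensor": "pressure", "value": 101.3},
    ],
    forwarded.append,
)
print(forwarded)  # → [{'sensor': 'temperature', 'value': 93.1}]
```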
Ryan Lawler: How much of this is applicable to new devices where you're installed at the manufacturing point, and how much of it is applying your technology to devices already in the market and bringing new use cases to them?
Kilton Hopkins: That's a great question. So let's talk for a moment about the topology that now includes the edge. At the very, very bottom are the devices, which sounds more like what you're talking about: this is a wearable that takes my heart rate, this is the controller for the motor speed on an assembly line, that kind of thing.
There are so many devices out there, including legacy devices, that need to be integrated, and because of that you'll never have agent software or some SDK that you put on every device in a solution. Realizing that, our stuff runs best one layer up from the actual sources of data themselves. That's either going to be an IoT gateway, or a WiFi router, or it's going to be in the trunk of the car, in the main computer. It's not going to be the tire-sensing module; it's going to be the thing that the tire-sensing module talks to.
That's for a couple of different reasons. At the very bottom layer, everything is siblings. So which one should do the processing, the temperature sensor or the pressure sensor? Sibling nodes are all on the same level. What you really want is for the layer above that to be the place where you start to collect and aggregate.
The layer right above the devices is the first place where you can aggregate. It's interesting because it's also your last chance to translate wireless signals, or translate protocols, or add some context before you start to lose your grip on this whole mess that is the diverse range of sensing data.
Farah Papaioannou: Also, these devices tend to be very commoditized; they tend to be dumber devices. People would rather have a thousand sensors that can gather more data than ten really, really smart, expensive sensors. Sitting above that layer, as Kilton said, lets them keep deploying more and more commodity-level sensors without us interfering with that.
Ryan Lawler: Okay, so how does that fit with the idea of different nodes connecting to each other and creating a kind of mesh network of devices? Can you talk about that, and how it fits into what you've built?
Kilton Hopkins: Yeah, sure. So there are meshes that are used for reliable data transmission. This would be like Bluetooth 5 mesh or Zigbee mesh, right? The whole point there is that if some of the nodes go down, the data still gets through because it finds another path. That's the redundancy effect of networking. We have a software mesh network that works kind of the same way: if some of the nodes are currently not connected, there are still other paths for getting things through.
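The redundancy idea, that if a node drops out traffic finds another path, can be illustrated with a toy breadth-first search over a made-up topology. This shows the general mesh concept, not Edgeworx's actual routing code:

```python
# Toy mesh routing: find any live path between two nodes, skipping nodes
# that are marked down. Topology and node names are invented.
from collections import deque

LINKS = {
    "gateway": {"router-a", "router-b"},
    "router-a": {"gateway", "camera"},
    "router-b": {"gateway", "camera"},
    "camera": {"router-a", "router-b"},
}

def find_path(src, dst, down=frozenset()):
    """Breadth-first search over live nodes; returns a path list or None."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in LINKS[path[-1]] - seen:
            if nxt not in down:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_path("gateway", "camera"))                     # via router-a or router-b
print(find_path("gateway", "camera", down={"router-a"}))  # → ['gateway', 'router-b', 'camera']
```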
All of the networks that you use to connect things, like LoRaWAN or WiFi or Zigbee or cellular or whatnot, all of them work really well with our technology, because how you transmit the stuff is just the carrier. We believe that in the future connectivity will be increasingly commoditized. People will say, "Oh, you sold me on a cellular contract for this oil field." Then over here we have a satellite connection, and over here we use LoRaWAN. You don't care, because the software that you run, the Edgeworx stuff, runs across all of them equally the same.
Ryan Lawler: And how do you or your clients figure out when it makes sense to do the compute locally on the device, on the edge, versus sending data back to the cloud for processing?
Farah Papaioannou: I think a couple of factors drive their decisions. One is the speed at which they want a response. In some cases they want to take action immediately, right, as in the image recognition case we talked about. If they're trying to do target tracking, they want to know right away when they've found a target they want to track, and to be able to take some sort of action, whether it be notifying a first responder or letting the [inaudible 00:18:22] next door know, "Kate, you need to get ahold of this guy because he's coming in your direction." If you had to take all that time and wait for a cloud to do the processing, that wouldn't make sense.
Another factor is security. Take the example of a hospital. Once you're matching patient information, you can't send data outside to the cloud, because of HIPAA compliance, and what if that data is hacked? It has to stay within the four walls. Well, edge computing enables people to do some really cool processing without necessarily sending it back to some sort of cloud. We talked about bandwidth earlier as well, and cost. These are some of the real issues we've seen that drive the decision making.
One of the things that we're really big on is that the cloud isn't this end-all, be-all sort of thing. I think that's a pretty controversial thing to say in this day and age. Everyone thinks the cloud is "it." We kind of think of the cloud as a ball and chain in some ways. It hinders a lot of the things you can do. This company that wants to do real-time processing for image tracking, if they had to send data up to a cloud, it just wouldn't work. That use case would fall flat on its face. It's only because we can do edge processing that they can actually pursue it.
Then the cloud is pretty slow as well. If speed is a factor, this is going to be a limiting thing for you. And it's not as secure. We all know that; we're aware of it. We've sort of turned a blind eye to the security constraints because of the ease of use of the cloud, but it's a big deal for a lot of use cases. Now, because we can do edge processing, we can enable these use cases in a way we couldn't before.
Kilton Hopkins: Don't forget about the cost of that cloud ball and chain you wear on your ankle. Once they've got your data, it seems that everything you do with it has a cost associated with it. Maybe you don't want to give them everything you've got.
Ryan Lawler: Right. So one of the other considerations here is this idea that data is the new oil, and all the large data processors, the cloud providers, whether it's Amazon or Google, who currently have the most data, are seen as taking the most advantage of it. When that processing moves to the edge, what effect does that have?
Kilton Hopkins: This is kind of our battle cry. We say, "Bring your own edge." If you get rid of your tethering and your reliance on the cloud, you get to choose where your data goes and what to do with it.
So take a factory that has been making the same product for the last, I don’t know, sixty years or so. They currently have operational technology, let’s abbreviate it, OT, probably around optimizing their operations, maybe the manufacturing flow, things like packaging, and then shipping and warehousing. All of that is a closed system, and it’s all there to run their factory.
Every company you talk to is going to talk about digital transformation, right? Especially the bigger companies are saying, "We need to move from being a physical-goods company to being a digitized company, even though we still make physical goods." Now, what if they go through that transformation and all of their data is now held in someone's cloud system, being processed by a Google or an Amazon or whoever?
Farah Papaioannou: Held hostage by them.
Kilton Hopkins: They are held hostage. They've basically given their company's next biggest asset, which is their data, over to somebody else. They don't want to do that. That's one of the reasons there has been slow uptake of these industrial IoT platforms: they require that the data go into some central cloud, and companies don't want to do that.
They want to still be viable businesses in the digital age. So you take the processing, put it at the edge, and you put the power back in the hands of those who create and own the data. That's a really important point for us, and it's one of the reasons our customers love us: they don't end up giving away the farm.
Ryan Lawler: Okay. Let's talk about customers and how they come to find out about Edgeworx. It's my understanding you have an open source component. Is it the case that someone plays around with the open source part and then decides to pay you for other services, or how does it work?
Farah Papaioannou: Yep, we are actually open source under the Eclipse Foundation. We open sourced in May of 2016, and within a couple of hours Eclipse asked if we would like to come under their umbrella, which of course was a no-brainer, given that they are taking the charge on industrial IoT, and IoT in general. So we're happy to be under the Eclipse Foundation, and all of our customers have come in through them.
It's a really great way to market to people, because it's not like I have to convince them, "Hey, edge is great, here are the reasons why." These people already have a use case; they're trying to solve a problem. They realize the cloud isn't going to work, and more often than not they've already been through the Microsofts, the Amazons, C3, the Resins. They've tried their own Kubernetes at the edge and they realize, "You know what? This is hard." So then they come to the Eclipse Foundation, they find us, and they say, "Help." And we don't have to say, "Well, this is how we're better than all these guys."
They've already done it for themselves. They've done the analysis, and so they try it. Sometimes they don't even try it; they just hear what we can do and they say, "Help." Then we show them what we can do. Right now you can actually go to the Eclipse Foundation, download our stuff, and start working with it today. And in short order we are going to have a hosted cloud, sorry, not a hosted cloud…
Kilton Hopkins: A PaaS, so Platform as a Service.
Farah Papaioannou: A PaaS, Platform as a Service, where you can bring your edge and get started with our software from our website, www.edgeworx.io.
So there are two ways you can start engaging. Obviously, if you do it through Edgeworx, you get the UI, the management, the orchestration, some of the things that are not available in the open source. But the open source engine is really strong, so if people want to get started that way, they can do that too.
Ryan Lawler: Okay, you're talking about large industrial corporations or operations that have very specific needs, and as a small, early-stage startup, how do you find the resources to figure out which problems to tackle, or which customers to work with, on a case-by-case basis? Because I figure you probably have to allocate resources very mindfully.
Farah Papaioannou: Well, the beauty of what we're doing is that we don't necessarily have to pick and choose, right? Because we're a software platform. All of these companies have developers. They already know how to write software. They already know which use cases they want to write for; they just want a way to deliver that to the edge, and so we solve that problem.
Android doesn't have to work with the Angry Birds guy to figure out how to write his app, or pick and choose which apps to run; they provide the tools for any developer to write apps.
So, for example, in oil and gas we have customers who know the algorithms they want to run. They know how to increase yield. They know how to do certain things to bring down costs. They've been running these algorithms in the cloud for a long time, and now they want to be able to do it on the edge, so we provide them the mechanism to do that. They download our platform onto these devices, and now they can deploy the apps they run right there on the edge. They can add new apps on the edge. They can roll back to previous apps. They can run apps they've already developed. They can write custom apps for new use cases.
That's actually one of the ways we differentiate from other people competing in the edge and IoT space. Big companies, as we've seen, solve for one use case only, and to do that you have to have subject-matter expertise in that one use case. I think that's a very small problem to solve for, and we're trying to solve a big problem here.
Kilton Hopkins: And that's what we want to do: enable people to build applications for the edge, regardless of their use case, using the same types of tooling they use to build for the cloud, and have it just work. That's what the world needs.
Farah Papaioannou: Yeah, and because we run containers at the edge, people can write in the languages they already know, whether it's C, C#, Java, or Python, and use the tools they already know how to use. They can run software they've already been running in their clouds; they can just containerize it and deploy it to the edge. They can take best-in-class software. For example, we can take TensorFlow, wrap an entire neural net in under an hour, and push it to the edge. So one of the beauties of what we're trying to do is that you don't need to rewrite software; you can take existing software as well.
Ryan Lawler: Okay. What’s one controversial opinion that each of you holds pretty strongly?
Farah Papaioannou: For me it was the idea that the cloud isn't going to be this end-all solution to everything, and that there are a lot of limitations to the cloud that people sort of turn a blind eye to. We envision a future where you can actually run really meaningful technology, really meaningful solutions and infrastructure, that don't necessarily involve the cloud. Today that's sort of heresy to think about.
Kilton Hopkins: And for me, all of the stuff that we've come to trust, you know, SSL certificates signed by a certificate authority and so on, those are tools for securing our computing era, but they're not sufficient, and we really have to rethink the way that we do security now that we're actually putting devices out that can impact the physical world.
If you have a bug in your data, or you get hacked and the database gets wiped out, you have backups. But you can't back up a human heart. You can't back up 500 pounds of steel that got incorrectly stamped because somebody hacked the system. So now that we're impacting the physical world, and think about autonomous vehicles, right, your four-way stop example: you can't restore a backup of a crashed car and the life that was inside it. We need to think about things very differently, and so my controversial opinion is that everything we know is not enough.
Ryan Lawler: Cool. Let’s say Edgeworx becomes ubiquitous, how does that change society? How does that change the way that people think about computing?
Farah Papaioannou: I think once Edgeworx becomes ubiquitous, and I believe it will, everyone will think about every device as a software platform and a computing platform. I know it's hard to wrap your head around that today, because we only think of these consumer apps, but there are so many things you can do. If you take real-time video imaging, you can do more than just detect from a camera.
If it's running in a car, you can use it to detect certain things. In smart agriculture: have the leaves changed color? Do they need to be harvested? With a refrigerator: do I have all the groceries I need? Have things started to mold or go bad? If you're able to take that app and port it everywhere, all of a sudden that's pretty interesting, and the kind of person who would want to write that and see it applied in a lot of different ways finds it pretty interesting too.
So I think we see a world where we have these devices that are akin to what we have with mobile devices today. Those things are everywhere, and they do all these different things that you wouldn't even think of.
I mean, if someone had told me five years ago that I would want to carry my entire picture collection with me at all times, that I'd want an encyclopedia with me at all times, that I'd want to access the internet at all times, that I'd want to watch videos at all times, I'd be like, psst, I'm not going to lug my TV and my DVD collection and every picture I've ever taken in the trunk of my car and leave the house with it. And today I can't imagine not having those things with me.
I hardly even use my phone to call people anymore; it's for all this other stuff. I don't think our minds have even started to grasp the different types of applications we're going to want at some point in the future, but Edgeworx is going to be running everywhere, and we're going to enable that to happen.
I think that's really cool, because now it's just the imagination of all these developers taking hold, and we'll see what they can come up with.
Kilton Hopkins: So when Edgeworx is ubiquitous, the question of how to wrangle and secure and protect data, the question of data provenance, is solved. Discussion over. There's no more conversation around: once I pass it to AWS, does it technically need to be managed by them? Who's responsible for compliance? Is it a compliant cloud? What happens if it gets copied along the way?
What happens if some application sends a copy of it outside of the boundaries we wanted to keep the application within? These questions are over, because that's one of the things we focus on. Right now there are endless architecture conversations, and really what we need is infrastructure that is just everywhere. If you're using that common infrastructure that everyone else is using, you can guarantee that if you didn't authorize that data to go from point A to point B, it ain't going nowhere. And that just closes that.
I would love to see that conversation be over, because we're entering an era where if you lose your data, you lose your value, and if somebody gets ahold of your data, they can be you. So we need protections around this.
Ryan Lawler: Yeah, the security aspect of it is probably, for me, the most attractive part. We've talked a lot about industrial applications and use cases, but when you think about the consumer use case, all these devices and the malware that just randomly appears on them, it's pretty scary.
Kilton Hopkins: Yeah, totally.
Ryan Lawler: Cool. Well thanks for being here, thanks for joining us for the podcast.
Kilton Hopkins: Yeah thanks so much for having us.
Farah Papaioannou: Thank you for having us.
Originally published at https://samsungnext.com on September 20, 2018.