What makes edge hardware “edge worthy”?

Hannah Mellow
Published in sunlight.io
5 min read · Jul 20, 2023

I recently listened in to an energetic discussion on edge hardware between Freeform Dynamics analyst Bryan Betts and Sunlight founder and CTO Julian Chesterfield. Their conversation centred on two questions: what counts as “edge hardware”, and what makes it “edge worthy”?

As the discussion progressed, the answer became clear, though of course it raised a few more questions along the way. First, we need to define what we mean by “edge”, as edge computing can mean different things to different people. A usefully broad definition comes from TechTarget (2021), which says that “edge computing moves some portion of storage and compute resources out of the central data center and closer to the source of the data itself.”

To help with the hardware question, we can then refine this by categorizing edge computing into three classes: near edge, far edge and extreme edge. The types of edge hardware on the market reflect these categories, summarized in the list below (and in the rough sketch that follows it).

  • Near edge: These systems tend to be rackable like a data center but more ruggedized, with a cut-down specification and the ability to operate ‘further out’. Lenovo, Dell and HPE have some very powerful systems designed for this space.
  • Far edge: These systems are designed to run outside data center-like environments. They need to be air-cooled, shockproof, moisture-resistant and switchless. You’re more likely to see these wall-mounted than in a rack. The majority of systems in this space have an Intel chip, including Atom, i3, i5, i7 and embedded Xeon. Lenovo, Supermicro and Advantech have some of the best ranges for the far edge.
  • Extreme edge: The hardware at the extreme edge tends to be low-power, single-function devices capable of a certain amount of general compute, mainly collecting data from IoT sensors. Examples include the Raspberry Pi, NVIDIA Jetson boards with an embedded GPU, and small custom-built system-on-chip boards based on the Arm architecture. You don’t tend to see many Intel-based devices because of the low power/footprint envelope.
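For readers who think in code, here is a minimal Python sketch of that taxonomy. The form factors and vendor names simply restate the list above; anything beyond it (such as the near edge silicon) is an illustrative assumption rather than a formal specification:

```python
from dataclasses import dataclass

@dataclass
class EdgeTier:
    """Illustrative summary of one edge hardware class (not a formal spec)."""
    name: str
    form_factor: str        # where the hardware typically lives
    typical_silicon: str    # representative CPUs, as named above
    example_vendors: list   # vendors/boards mentioned in the text

EDGE_TIERS = [
    EdgeTier("near edge",
             "rackable but ruggedized, cut-down data-center spec",
             "powerful server-class x86 (assumption)",
             ["Lenovo", "Dell", "HPE"]),
    EdgeTier("far edge",
             "wall-mounted, air-cooled, shockproof, switchless",
             "Intel Atom, i3/i5/i7, embedded Xeon",
             ["Lenovo", "Supermicro", "Advantech"]),
    EdgeTier("extreme edge",
             "low-power, single-function boards near the sensors",
             "Arm SoCs, embedded GPUs",
             ["Raspberry Pi", "NVIDIA Jetson"]),
]

for tier in EDGE_TIERS:
    print(f"{tier.name}: {tier.form_factor}")
```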

While edge computing is a buzzword in the industry right now, some of the principles aren’t actually new at all. In fact, ruggedization has been a buzzword in the manufacturing industry for a good number of years. What’s changed? The use cases have. The rapid development and growth of IoT and AI applications have led to a proliferation of application use cases: automation, quality control, safety, personalization, insight, and cost/people/asset management. Some of these use cases are turning insight into action in real time. For example, a faulty biscuit needs to be removed from the production line before packaging; an autonomous vehicle needs to detect danger and take action to avoid it; a consumer takes an apple from the store and expects to be charged without having to queue at a till; I’m sure many others instantly come to mind. Decisions need to be made in a fraction of a second, so the data needs to be processed at source, at the far or extreme edge.

The conversation turned to the types of questions that enterprises ask when they’re specifying their edge hardware. Large, highly distributed enterprises (e.g. energy companies managing remote oil wells, industrial customers with lots of facilities, retail and restaurant chains with hundreds of locations) tend to start out with a request for high-spec servers designed by their tech teams and then, once they’ve calculated the cost of placing and managing these across all of their locations, rethink their requirements. Some common questions are listed below, followed by a rough sizing sketch:

  • What workloads do I need to run in each location?
  • How much storage and processing capacity do these workloads actually need?
  • How likely am I to add more workloads in future?
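To make that rethink concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (workload sizes, node prices, the number of locations) is a made-up assumption, used only to show how per-location specification decisions multiply across a distributed fleet:

```python
# Rough fleet-sizing sketch -- all numbers are hypothetical assumptions,
# used only to show how per-location choices multiply across many sites.

workloads = [
    # (name, vCPUs, RAM GB, storage GB) -- assumed per-location needs
    ("point-of-sale",   2, 4,  50),
    ("video-analytics", 4, 8, 200),
    ("iot-gateway",     1, 2,  20),
]

locations = 300   # e.g. a restaurant chain (assumption)
headroom = 1.5    # 50% headroom for future workloads (assumption)

vcpus   = sum(w[1] for w in workloads) * headroom
ram_gb  = sum(w[2] for w in workloads) * headroom
disk_gb = sum(w[3] for w in workloads) * headroom

high_spec_cost  = 12_000  # the rack server the tech team first specified (assumption)
right_size_cost = 3_500   # a far-edge box that still covers the needs above (assumption)

print(f"Per location: {vcpus:.0f} vCPUs, {ram_gb:.0f} GB RAM, {disk_gb:.0f} GB storage")
print(f"Fleet cost, original high spec: ${high_spec_cost * locations:,}")
print(f"Fleet cost, right-sized:        ${right_size_cost * locations:,}")
```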

Enterprises start with high-end system requirements and end up with a lower-cost solution that is a better match for their needs. This is typically different from the data center, where enterprises have always had a tendency to over-spec their infrastructure for future growth.

Outside of the data center, the type of hardware required is guided by the use case. For example, if you wanted to put infrastructure into a restaurant, there are often space constraints: real estate is at a premium, and restaurant owners want to fit in as many customers, and make as much food, as possible to maximize revenue. The only available space is often under a desk or on a wall; a larger sit-down restaurant like Pizza Hut may have a small office that can moonlight as a ‘server room’.

In industrial settings, there is a proliferation of IoT devices. These tend to be sensors and control devices with a single purpose. This is creating a requirement for local compute / IoT gateways to aggregate data, run AI algorithms and execute the control logic. For example, a factory production line may use video analytics to control production quality, and the feedback loop needs to communicate with the IoT devices instantly, for instance to remove a faulty object from the production line.
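As a minimal sketch of what that feedback loop might look like on an edge gateway: the camera, model and ejector functions below are hypothetical stand-ins (in practice they would wrap a camera SDK, an inference runtime and a PLC), and the threshold and cycle time are assumptions.

```python
import random
import time

# Hypothetical stand-ins for the gateway's local interfaces; a real deployment
# would wrap a camera SDK, an inference runtime and a PLC or reject actuator.
def capture_frame(camera_id: str) -> bytes:
    return b"<jpeg bytes>"            # placeholder frame

def run_defect_model(frame: bytes) -> float:
    return random.random()            # placeholder defect probability

def trigger_ejector(line_id: str) -> None:
    print(f"ejecting item on line {line_id}")

DEFECT_THRESHOLD = 0.9                # assumption: tuned per product line

def control_loop(camera_id: str, line_id: str, cycles: int = 100) -> None:
    """Run inference locally on the gateway and act immediately,
    with no round trip to a central data center."""
    for _ in range(cycles):
        frame = capture_frame(camera_id)
        if run_defect_model(frame) > DEFECT_THRESHOLD:
            trigger_ejector(line_id)  # remove the faulty item before packaging
        time.sleep(0.05)              # ~20 decisions per second (assumption)

if __name__ == "__main__":
    control_loop("cam-01", "line-A")
```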

In these scenarios, there are multiple types of applications in play, each with different requirements: Windows applications, VMs and containers. People talk about containerization as a “silver bullet” that solves everything, but there are many different flavors of containerization, and each application tends to drive its own container platform. Enterprises end up running multiple container environments, which is counterintuitive, given that the purpose of containers was to run all applications on one platform. And as described above, one of the key challenges in these far and extreme edge environments is space. It’s not possible to have racks of servers to run all of these different environments. Adding a virtualization layer (like Sunlight’s Type 1 hypervisor) allows all of these different applications to run in their own environments on a single edge server (or a pair for high availability). It also makes it easier to manage the infrastructure and applications, especially when they are distributed across hundreds of different locations.
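As a purely illustrative, vendor-neutral sketch of the consolidation problem a hypervisor layer solves (this is not Sunlight’s actual API, and the node capacity and workload sizes are assumptions):

```python
# Hypothetical sketch of consolidating mixed workloads onto one small edge node
# (replicated to a second node for HA). Capacities and sizes are assumptions.

NODE_CAPACITY = {"vcpus": 16, "ram_gb": 64}   # assumed far-edge box

workloads = [
    # (name, kind, vcpus, ram_gb)
    ("pos-windows",        "Windows VM",        4, 8),
    ("analytics-k8s",      "container cluster", 6, 16),
    ("legacy-scada",       "Linux VM",          2, 4),
    ("iot-gateway-docker", "container runtime", 2, 4),
]

used_vcpus = sum(w[2] for w in workloads)
used_ram   = sum(w[3] for w in workloads)

assert used_vcpus <= NODE_CAPACITY["vcpus"] and used_ram <= NODE_CAPACITY["ram_gb"], \
    "workloads do not fit on a single node -- revisit the spec or add a node"

print(f"{len(workloads)} environments on one node: "
      f"{used_vcpus}/{NODE_CAPACITY['vcpus']} vCPUs, "
      f"{used_ram}/{NODE_CAPACITY['ram_gb']} GB RAM "
      f"(mirrored to a second node for high availability)")
```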

In summary, enterprises looking at hardware options at the edge need to ask:

  • What do I want to achieve?
  • What applications are required to achieve this and what environments do they require?
  • What space/power/network availability do I have at each location?
  • What budget do I have available?
  • What technical resources are available on site and how will the technology be managed?

If in doubt, speak to Sunlight or a Sunlight partner. Sunlight offers off-the-shelf, t-shirt-sized edge computing solutions for a variety of use cases.

Sunlight reseller Lenovo designs and builds a range of servers specifically for the unique requirements of far edge use cases: the ThinkEdge SE30, SE50 and SE70, and the ThinkSystem SE350 and SE450. The Lenovo edge servers offer the power, performance and flexibility customers need to build next-level edge networks. Coupled with Sunlight’s HyperConverged Edge stack and NexCenter infrastructure manager, the software and hardware stacks are ideal for data-intensive applications at the edge (such as IoT and AI) thanks to their small footprint and high performance. Find out more about the joint solution here.
