Introduction to IoT Week-8: SUMMARY

Pranay Bhatnagar
7 min read · Oct 26, 2023


Welcome back to our exciting journey through the world of the Internet of Things (IoT). We’ve now arrived at Week 8 of our comprehensive 12-week series, and the horizon of IoT knowledge is expanding further.

This week, we’ll explore SDN for mobile networking, Rule Placement, ODIN, Ubi-Flow, Mobi-Flow, Data-Centric Networking, Anomaly Detection, Cloud Computing, and service models, shedding light on the crucial concepts that drive the IoT ecosystem.

Whether you’re a newcomer seeking insights into IoT or a seasoned enthusiast looking to deepen your understanding, this blog series aims to cater to all. So, fasten your seatbelts as we embark on another enlightening adventure into the realm of IoT in Week 8.

Traditional wireless mobile networks suffer from several well-known problems: they are difficult to scale and manage, they are inflexible when new services need to be introduced, and they carry high capital and operational expenditure.

SDN for Mobile Networking: The adoption of Software-Defined Networking (SDN) in mobile networking brings several key advantages. SDN’s flow-table paradigm is well suited to end-to-end communication across heterogeneous technologies, and its logically centralized control enables efficient base-station coordination, path management, and network virtualization. Further benefits include centralized control of devices from different vendors, faster integration of new services, and the abstraction of network control and management, ultimately providing a more user-friendly network experience.
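To make the flow-table paradigm concrete, here is a minimal, purely illustrative sketch of how a switch matches packets against prioritized flow entries and applies the corresponding action. The match fields, priorities, and actions are simplified assumptions for illustration, not a real OpenFlow implementation.

```python
# Illustrative flow-table lookup: match fields and actions are simplified
# assumptions, not a real OpenFlow switch implementation.

flow_table = [
    {"priority": 20, "match": {"eth_type": 0x0800, "ip_dst": "10.0.0.5"},
     "action": "output:2"},
    {"priority": 10, "match": {"eth_type": 0x0800},
     "action": "output:1"},
    {"priority": 0,  "match": {},              # table-miss entry
     "action": "send_to_controller"},
]

def lookup(packet):
    """Return the action of the highest-priority entry matching the packet."""
    for entry in sorted(flow_table, key=lambda e: e["priority"], reverse=True):
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["action"]
    return "drop"

packet = {"eth_type": 0x0800, "ip_src": "10.0.0.9", "ip_dst": "10.0.0.5"}
print(lookup(packet))   # -> output:2
```

The same match-action idea extends to wireless access devices once the controller knows how to express wireless-specific state, which is exactly where the rule-placement challenges below arise.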

This transformation in mobile networking brings exciting possibilities and improved efficiency to the world of mobile communication.

Rule Placement at Access Devices: Placing rules at access devices within Software-Defined Networking (SDN) for mobile networks presents several challenges. Standard OpenFlow does not directly support wireless networks, so a modified version of OpenFlow is required. Mobile users make the wireless environment highly dynamic, which necessitates frequent changes in rule placement, and the diversity of devices in the network adds further complexity to rule management.

To address these challenges, various approaches have been developed:

1. ODIN: ODIN places an agent at each access point that communicates with a controller. It consists of two components: the Odin agent, which runs on the physical access points, and the Odin master, which runs at the controller end. ODIN is built around the Light Virtual Access Point (LVAP) abstraction over IEEE 802.11, which gives each client its own virtual access point so that client state can be moved between physical APs.

2. Ubi-Flow: Ubi-Flow specializes in mobility management in Software-Defined IoT networks, offering scalable control of access points and fault tolerance. It incorporates features like flow-scheduling, network partitioning, network matching, and load balancing.

3. Mobi-Flow: Mobi-Flow takes a proactive approach to rule placement based on users’ movements within the network. It predicts a user’s location at time (t+1) from their location at time (t) and places flow rules at the access points associated with the predicted location. The prediction uses an Order-K Markov predictor, and linear programming is used to select the optimal access points for rule placement (a minimal sketch of such a predictor appears below).

These approaches address the dynamic and heterogeneous nature of wireless mobile networks within the context of SDN, providing solutions for efficient rule placement at access devices.
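As a rough illustration of the prediction step that Mobi-Flow relies on, the sketch below implements a simple order-k Markov predictor over a user’s history of visited access points. The history format, the choice of k, and the tie-breaking are assumptions made for illustration, and the linear-programming step for choosing optimal access points is not reproduced here.

```python
from collections import Counter, defaultdict

def train_order_k(history, k=2):
    """Count which access point follows each length-k context in the history."""
    model = defaultdict(Counter)
    for i in range(len(history) - k):
        context = tuple(history[i:i + k])
        model[context][history[i + k]] += 1
    return model

def predict_next(model, recent, k=2):
    """Predict the next AP from the most recent k visits (None if context unseen)."""
    context = tuple(recent[-k:])
    if context not in model:
        return None
    return model[context].most_common(1)[0][0]

# A user's sequence of associated access points over time (illustrative).
history = ["ap1", "ap2", "ap3", "ap1", "ap2", "ap3", "ap1", "ap2"]
model = train_order_k(history, k=2)
print(predict_next(model, history, k=2))   # -> "ap3"
```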

Rule placement within the backbone network of an IoT infrastructure can leverage existing schemes designed for wired networks. Load balancing is a crucial consideration, given the dynamic nature of IoT networks, and dynamic resource allocation methods can be incorporated for improved efficiency.

Data-centric networking in this context involves handling two types of flows:

1. Mice-Flow: small, short-lived flows, handled effectively with coarse-grained wildcard rules.
2. Elephant-Flow: large, long-lived flows, handled with exact-match rules for optimal forwarding.

To ensure efficient flow forwarding, it’s essential to classify the flows before inserting flow rules at the network switches.
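One simple way to perform such a classification is sketched below: a byte-count threshold separates elephant flows from mice, and the rule type is chosen accordingly. The threshold value and rule structure are assumptions made for illustration only.

```python
# Illustrative mice/elephant classification by cumulative byte count.
# The 1 MB threshold and the rule structure are assumptions, not a standard.

ELEPHANT_THRESHOLD_BYTES = 1_000_000

def build_rule(flow, byte_count):
    """Return an exact-match rule for elephant flows, a wildcard rule otherwise."""
    if byte_count >= ELEPHANT_THRESHOLD_BYTES:
        # Elephant flow: match the full 5-tuple exactly.
        return {"match": flow, "type": "exact"}
    # Mice flow: match only on destination, wildcarding the remaining fields.
    return {"match": {"ip_dst": flow["ip_dst"]}, "type": "wildcard"}

flow = {"ip_src": "10.0.0.2", "ip_dst": "10.0.0.7",
        "src_port": 40000, "dst_port": 80, "proto": "tcp"}
print(build_rule(flow, byte_count=5_000_000))   # exact-match rule
print(build_rule(flow, byte_count=20_000))      # wildcard rule
```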

Anomaly Detection in IoT Networks: Anomaly detection is also vital in IoT networks. Monitoring the network through OpenFlow, by observing individual flows and collecting port statistics from the switches, makes it possible to detect anomalies, which often manifest as a surge in network traffic.
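A rough sketch of this idea follows: port byte counters are sampled periodically, and a port is flagged when its byte rate jumps well above its running average. The sample values, sampling interval, and threshold factor are assumptions for illustration; in practice the counters would come from OpenFlow port-statistics replies collected by the controller.

```python
# Illustrative anomaly check over sampled port byte counters.
# In practice the samples would be parsed from OpenFlow port-statistics replies.

def detect_anomalies(byte_counts, interval=5.0, factor=3.0):
    """Yield an alert whenever the byte rate jumps above factor x the running mean."""
    prev = None
    mean_rate = None
    for count in byte_counts:
        if prev is not None:
            rate = (count - prev) / interval
            if mean_rate is not None and rate > factor * mean_rate:
                yield f"Anomaly suspected: {rate:.0f} B/s (running mean {mean_rate:.0f} B/s)"
            # Exponentially weighted running mean of the observed rate.
            mean_rate = rate if mean_rate is None else 0.9 * mean_rate + 0.1 * rate
        prev = count

# Simulated counter samples: steady traffic followed by a sudden surge.
samples = [0, 5_000, 10_000, 15_000, 20_000, 600_000]
for alert in detect_anomalies(samples):
    print(alert)
```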

Experimenting with Wireless Networks: For experimentation with wireless networks, tools like Mininet-WiFi can be used to deploy a network that supports both wired (Ethernet) and wireless (Wi-Fi, IEEE 802.11 family) communication. ONOS can be employed as the SDN controller in this setup.
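A minimal Mininet-WiFi topology along these lines might look like the sketch below, assuming Mininet-WiFi is installed and a remote SDN controller such as ONOS is listening on the default OpenFlow port; exact API names can vary slightly between Mininet-WiFi versions.

```python
#!/usr/bin/env python
# Minimal Mininet-WiFi sketch: one access point, two stations, one wired host,
# managed by a remote SDN controller (e.g., ONOS). API details may vary by version.

from mininet.node import RemoteController
from mn_wifi.net import Mininet_wifi
from mn_wifi.cli import CLI

def topology():
    net = Mininet_wifi()

    # Remote controller (e.g., ONOS) assumed reachable on localhost:6653.
    c0 = net.addController('c0', controller=RemoteController,
                           ip='127.0.0.1', port=6653)

    ap1 = net.addAccessPoint('ap1', ssid='iot-ssid', mode='g', channel='1')
    sta1 = net.addStation('sta1')
    sta2 = net.addStation('sta2')
    h1 = net.addHost('h1')           # wired (Ethernet) host

    net.configureWifiNodes()
    net.addLink(sta1, ap1)           # wireless associations
    net.addLink(sta2, ap1)
    net.addLink(ap1, h1)             # wired link between AP and host

    net.build()
    c0.start()
    ap1.start([c0])
    CLI(net)
    net.stop()

if __name__ == '__main__':
    topology()
```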

Cloud computing is a model that enables convenient, on-demand access to a shared pool of configurable computing resources, including network infrastructures, servers, storage, and applications, as defined by NIST. It represents an evolution beyond utility computing, offering a highly abstracted computation and storage model that can be rapidly allocated and released with minimal management effort. This model features essential characteristics, service models, and deployment models, and it provides on-demand services accessible from anywhere at any time.

The business advantages of cloud computing are significant, including minimal upfront infrastructure investment, real-time infrastructure availability, improved resource utilization, usage-based costing, and reduced time to market. Cloud computing also offers general characteristics such as enhanced agility in resource provisioning, device and location independence (ubiquity), multitenancy for cost sharing, dynamic load balancing, high reliability and scalability, low cost and maintenance overhead, and improved security and access control.

NIST Visual Model of Cloud Computing:

Characteristics of Cloud Computing: Cloud computing exhibits several essential characteristics, including:

1. Broad Network Access: Cloud resources should be accessible over the network using standard mechanisms and support various client platforms, such as mobile devices, laptops, and PDAs.

2. Rapid Elasticity: Cloud resource allocation should be automatic, dynamic, and elastic. It should allow for rapid scaling up and down to meet changing demands.

3. On-Demand Self-Service: Users should be able to provision and manage cloud resources as needed, offering a self-service approach.

4. Resource Pooling: Cloud providers should pool resources to serve multiple users, allocating resources according to user demand. This multi-tenant model optimizes resource usage.

5. Measured Service: Resource usage is monitored and recorded, allowing for dynamic optimization and cost control while maintaining transparency between the provider and consumer.
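To illustrate what measured service and usage-based costing mean in practice, here is a tiny, purely illustrative metering sketch; the resource types and rates are invented for the example and do not reflect any real provider’s pricing.

```python
# Illustrative pay-per-use metering. Rates are invented for the example.
RATES = {
    "vm_hours":   0.05,   # currency units per VM-hour
    "storage_gb": 0.02,   # per GB-month
    "egress_gb":  0.09,   # per GB transferred out
}

def monthly_bill(usage):
    """Compute a usage-based bill from metered consumption."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

usage = {"vm_hours": 720, "storage_gb": 100, "egress_gb": 50}
print(f"Total: {monthly_bill(usage):.2f}")   # 36.00 + 2.00 + 4.50 = 42.50
```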

Cloud computing components encompass clients/end-users, services, applications, platforms, storage, and infrastructure, delivered through the Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS) service models. These services can be offered through different deployment models such as public cloud, private cloud, hybrid cloud, community cloud, distributed cloud, multi-cloud, and inter-cloud, each catering to specific use cases and requirements.

In summary, cloud computing offers flexibility, scalability, and accessibility while optimizing resource utilization and service delivery.

Comparison of Different Deployment Models:

Service Models:

As defined above, cloud computing provides on-demand access to a shared pool of configurable computing resources over a network. It has evolved from utility computing, offers a high level of abstraction and rapid resource allocation, and, beyond the business benefits already noted, lets organizations trial new business models with little upfront investment.

Its essential characteristics (broad network access, rapid elasticity, on-demand self-service, resource pooling, and measured service) are exposed through three service models: Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS).

IaaS provides basic building blocks like servers, storage, and network access. Users rent these resources, while cloud service providers manage them. It suits new businesses, rapidly growing organizations, and applications with fluctuating demands. It offers scalability, manageability, and accessibility, and its infrastructure resources can be monitored, allocated, and optimized automatically. IaaS can be public, private, or hybrid.

PaaS allows developers to create and deploy applications, simplifying the development process. It provides tools for design, development, and hosting, abstracting underlying infrastructure complexities. Users can rent virtualized servers and associated services, offering elastic scaling.

SaaS delivers complete end-user applications run and managed by service providers. It operates over the Internet, supporting a pay-as-you-go model with remote access via web browsers. SaaS architecture emphasizes scalability, multi-tenancy, and configurability.

Comparison of Different Service Models:

Cloud security is vital, focusing on network, host, and application-level security, as well as data security through identity management, encryption, and access control. Trust and reputation, risk assessment, and authentication are essential for selecting appropriate cloud providers.

Cloud simulations using tools like CloudSim and CloudAnalyst facilitate pre-deployment testing, performance evaluation, and issue detection in the rapidly growing cloud computing environment. These simulators make it possible to evaluate services in a controlled environment and to design countermeasures before deployment.

In summary, cloud computing offers diverse service models (IaaS, PaaS, SaaS) with essential characteristics like scalability, on-demand self-service, and efficient resource allocation. Security, trust, and risk assessment play a crucial role in selecting cloud providers, while cloud simulations help assess and optimize cloud services.

As Week 8 of our 12-week IoT exploration draws to a close, we’ve uncovered the critical role of cloud computing. We’ve seen how Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) offer scalable solutions with unique advantages and security considerations.

We’ve also explored trust, reputation, and risk assessment in the cloud, emphasizing data security and privacy. Cloud simulations, like CloudSim and CloudAnalyst, enable us to optimize services and ensure performance.

This journey has shown us that cloud computing is the bedrock of IoT, driving data flow and analytics. We’re now better equipped to navigate the IoT-cloud relationship.

Stay tuned for more IoT insights in the weeks ahead, as we dive deeper into sensors, actuators, and IoT applications. Our goal is to provide valuable knowledge for your IoT journey.

Thank you for being part of this exciting series, and stay curious, connected, and tuned in!
