Introduction to IoT Week-9: SUMMARY

Pranay Bhatnagar
Oct 26, 2023


Welcome to Week 9 of our Introduction to IoT NPTEL series. In this final stretch, we’ll delve into the fascinating world of IoT applications, exploring how IoT is revolutionizing various industries. Join us as we uncover the real-world impact of IoT technology.

OpenStack is a powerful open-source platform for building cloud infrastructure. It has evolved significantly since its inception in 2010 as a collaborative project between Rackspace Hosting and NASA, and it is now backed by major companies such as IBM, Cisco, HP, Dell, VMware, Red Hat, SUSE, and Rackspace Hosting, along with a large and thriving community.

OpenStack can be used to build both private and public clouds, making it a versatile solution for various cloud computing needs. It has seen numerous releases, from Austin in 2010 to Zed in 2022, and development continues with newer releases such as Bobcat.

Key components of OpenStack include:

1. Keystone: This component serves as the identity service, providing authentication and authorization features.

2. Horizon: Known as the dashboard, Horizon offers a graphical user interface (GUI) for the software and provides an overview of the other components within OpenStack.

3. Nova: As the compute service, Nova is where you launch and manage your virtual instances.

4. Glance: This image service helps with discovering, registering, and retrieving virtual machine (VM) images, including snapshots.

5. Swift: Functioning as an object storage service, Swift enables secure, cost-effective, and efficient data storage.

6. Neutron: This networking service component allows other services to communicate with one another and provides the capability to create custom networks.

7. Cinder: For block storage virtualization and management, Cinder is a vital component within OpenStack.

8. Heat: Heat handles orchestration, ensuring the automated management of cloud resources.

9. Ceilometer: This telemetry component meters service usage, tracking which services are being used and for how long, to support billing.

OpenStack is a comprehensive solution for building and managing cloud infrastructure, making it a valuable asset for various industries and cloud computing applications.
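
To make the division of labor concrete, here is a minimal sketch using the official openstacksdk Python library. The cloud profile name and the image, flavor, and network names are illustrative assumptions; Keystone handles authentication behind the `connect()` call, while Glance, Nova, and Neutron serve the image, compute, and network requests.

```python
import openstack

# Keystone: authenticate using a cloud profile defined in clouds.yaml
# (the profile name "my-cloud" is an assumption for this sketch).
conn = openstack.connect(cloud="my-cloud")

# Glance: look up a VM image by name (hypothetical image name).
image = conn.compute.find_image("ubuntu-22.04")

# Nova: choose a flavor and launch a virtual instance.
flavor = conn.compute.find_flavor("m1.small")

# Neutron: attach the instance to an existing network (hypothetical name).
network = conn.network.find_network("private-net")

server = conn.compute.create_server(
    name="iot-gateway-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)  # e.g. "ACTIVE"
```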

Sensor Cloud is a sophisticated concept that goes beyond merely integrating sensors with cloud computing. It isn’t just about transmitting sensor data to the cloud; it involves the virtualization of sensor nodes and operates on a pay-per-use model. Sensor Cloud creates a layer between sensor nodes and end-users, providing a bridge for efficient sensor data utilization.

The virtualization concept in Sensor Cloud is akin to how a single physical host can appear as multiple computers through the use of virtual machines (VMs). Pooling physical resources optimizes IT throughput and reduces costs. This approach offers various benefits, such as sharing resources, encapsulating a complete computing environment, running independently of the underlying hardware, and enabling VM migration.

Sensor Cloud: Differences from WSN

Unlike traditional Wireless Sensor Networks (WSN), Sensor Cloud introduces new actors and dynamics. The key actors include:

1. End-users: These individuals may not be aware of which physical sensors are serving their applications, highlighting the abstraction of the underlying sensor network.

2. Sensor-owners: They purchase and deploy physical sensor devices in different locations and make them available for use within the sensor cloud.

3. Sensor-Cloud Service Providers (SCSP): These are business entities charging end-users based on their Sensor-as-a-Service (Se-aaS) usage. Well-known examples include AWS IoT, Microsoft Azure IoT, Google Cloud IoT, and IBM Watson IoT.

The architecture of Sensor Cloud involves end-users registering themselves, selecting templates, and requesting applications. Sensor-owners deploy a mix of heterogeneous or homogeneous physical sensor nodes across various locations. The SCSP plays a managerial role in orchestrating these resources.

Some of the management issues in Sensor Cloud include optimizing the composition of virtual sensor nodes, implementing effective data caching strategies, and setting appropriate pricing models for end-users.

Sensor Cloud: View

Virtual sensors play a pivotal role in efficient sensor network management. Their formation can be optimized based on two conditions: CoV-I for homogeneous sensor nodes within the same geographic boundary and CoV-II for heterogeneous sensor nodes spread across various regions. This approach tackles the resource constraints and dynamic changes in sensor conditions that are prevalent in traditional networks.
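
As a rough illustration of these two conditions, here is a toy Python sketch that composes virtual sensors from physical nodes. The node layout and grouping logic are assumptions for this example, not the formal CoV definitions:

```python
from collections import defaultdict

def compose_cov1(nodes):
    """CoV-I: group homogeneous nodes that share a geographic region;
    each group of physical node ids forms one virtual sensor."""
    groups = defaultdict(list)
    for n in nodes:
        groups[(n["type"], n["region"])].append(n["id"])
    return list(groups.values())

def compose_cov2(nodes, needed_types):
    """CoV-II: pick one node of each required type, regions may differ,
    to form a single heterogeneous virtual sensor."""
    chosen = {}
    for n in nodes:
        if n["type"] in needed_types and n["type"] not in chosen:
            chosen[n["type"]] = n["id"]
    return list(chosen.values()) if len(chosen) == len(needed_types) else None

nodes = [
    {"id": "t1", "type": "temperature", "region": "A"},
    {"id": "t2", "type": "temperature", "region": "A"},
    {"id": "h1", "type": "humidity",    "region": "B"},
]
print(compose_cov1(nodes))                               # [['t1', 't2'], ['h1']]
print(compose_cov2(nodes, {"temperature", "humidity"}))  # ['t1', 'h1']
```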

Dynamic and adaptive caching mechanisms are essential in the Sensor-Cloud paradigm. They improve resource utilization and accommodate varying environmental conditions. When end-users request sensor data through a web interface, physical sensor nodes continuously collect and transmit data to the Sensor-Cloud. Where environmental conditions change slowly, previously sensed data remains valid, so serving it from a cache instead of re-sensing prevents unnecessary energy consumption. To achieve this, an internal cache (IC) handles end-users' requests and decides whether to provide data directly or retrieve it from an external cache (EC); periodic re-caching from the EC keeps the data available and fresh. These mechanisms ensure streamlined and sustainable data management within the Sensor-Cloud framework.
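
As a rough illustration, here is a minimal Python sketch of that IC/EC interplay. The data layout and staleness interval are assumptions for this example (and the periodic EC refresh is simplified to an on-demand one), not the actual Sensor-Cloud algorithm.

```python
import time

class SensorCache:
    """Toy internal-cache (IC) / external-cache (EC) pair: the IC answers
    end-user requests; the EC holds readings pushed by physical nodes."""

    def __init__(self, staleness_s=60):
        self.internal = {}        # sensor_id -> (value, timestamp)
        self.external = {}        # sensor_id -> (value, timestamp)
        self.staleness_s = staleness_s

    def on_sensor_reading(self, sensor_id, value):
        # Physical nodes push fresh readings into the external cache.
        self.external[sensor_id] = (value, time.time())

    def get(self, sensor_id):
        entry = self.internal.get(sensor_id)
        if entry and time.time() - entry[1] < self.staleness_s:
            return entry[0]       # IC entry still fresh: serve directly
        entry = self.external.get(sensor_id)
        if entry:                 # otherwise re-cache from the EC
            self.internal[sensor_id] = entry
            return entry[0]
        return None               # no reading available yet

cache = SensorCache(staleness_s=60)
cache.on_sensor_reading("temp-42", 23.5)
print(cache.get("temp-42"))  # -> 23.5 (re-cached from the EC, then served)
```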

(Figure: the architecture of caching)

The Dynamic Optimal Pricing for Sensor-Cloud Infrastructure focuses on pricing strategies specifically tailored for Sensor-Cloud services (Se-aaS), unlike existing schemes primarily designed for more general services such as IaaS and SaaS. This dynamic pricing scheme includes two key components:

1. Pricing Attributed to Hardware (pH): This component deals with the cost associated with the usage of physical sensor nodes. It aims to optimize pricing based on the hardware infrastructure used.

2. Pricing Attributed to Infrastructure (pI): This part of the pricing scheme focuses on the pricing associated with the overall infrastructure of the sensor-cloud service, considering factors beyond the physical sensors.

The primary goals of this proposed pricing scheme are to:

- Maximize the profit of the Sensor-Cloud Service Provider (SCSP).
- Maximize the profit of the sensor owner.
- Ensure end users’ satisfaction by offering competitive and efficient pricing structures.

The emphasis is on enhancing profitability for the SCSP, finding the optimal pricing for end-users, and maintaining user satisfaction. This scheme recognizes the unique requirements of Sensor-Cloud services and seeks to create a dynamic and responsive pricing model that benefits all stakeholders involved in the ecosystem.
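
As a back-of-the-envelope illustration of the pH/pI split, here is a small Python sketch. The rate constants and the billing formula are invented for this example; the actual scheme derives optimal prices dynamically rather than from fixed rates.

```python
def sensor_cloud_price(usage_hours, nodes_used, data_gb,
                       rate_hw_per_node_hour=0.02,   # assumed pH rate
                       rate_infra_per_gb=0.05):      # assumed pI rate
    """Decompose a Se-aaS bill into pH (physical-sensor usage) and
    pI (surrounding infrastructure), per the components above."""
    pH = nodes_used * usage_hours * rate_hw_per_node_hour
    pI = data_gb * rate_infra_per_gb
    return {"pH": round(pH, 2), "pI": round(pI, 2), "total": round(pH + pI, 2)}

print(sensor_cloud_price(usage_hours=10, nodes_used=5, data_gb=2.0))
# -> {'pH': 1.0, 'pI': 0.1, 'total': 1.1}
```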

Fog Computing is an innovative approach aimed at addressing the challenges associated with cloud computing in the context of processing Internet of Things (IoT) data. Key points about Fog Computing include:

1. Definition and Origin: Fog computing, also known as “fogging,” is a term coined by Cisco. It represents the concept of extending cloud computing closer to IoT devices. This approach creates an intermediate layer between the cloud and these devices.

2. Primary Objective: The primary aim of fog computing is to overcome the limitations experienced in cloud computing when handling IoT data. By introducing a layer of fog computing, it attempts to enhance data processing efficiency for IoT applications.

3. Data Explosion: The scale of data generated by sensors is immense. Sensors were projected to contribute 40% of the world’s data by 2020, and this surge in volume, about 2.5 quintillion bytes generated daily, presents significant challenges.

4. IoT Growth: The IoT industry has experienced rapid growth, with predictions of a $1.7 trillion expenditure on IoT devices by 2020. There was also an anticipation of more than 30 billion IoT devices and 250 million connected vehicles worldwide by the same year.

5. Challenges of Cloud Computing: The traditional cloud model struggles to meet the requirements of IoT due to issues related to data volume, latency, and bandwidth. The vast number of online devices generating exabytes of data daily has led to difficulties in processing and storing all this information.

6. Why Fog Computing: Fog computing is essential due to the inadequacy of the current cloud model in handling the requirements of IoT. Challenges related to data volume, latency, and bandwidth necessitate the use of fog computing as an intermediary solution.

(Figures: latency comparison; working of fog)

7. Key Requirements: The key requirements for IoT data processing are reducing data latency, ensuring data security, maintaining operation reliability, and efficiently monitoring data across large geographical areas. These factors drive the adoption of fog computing.

8. Architecture of Fog: The architecture involves extending cloud services to IoT devices through fog, which acts as an intermediary layer. Multiple fog nodes can be deployed, processing sensor data before transmitting it to the cloud. This architecture reduces latency, conserves bandwidth, and optimizes cloud storage usage.

9. Working of Fog: Fog nodes operate based on the type of data they receive. They handle very time-sensitive data at the nearest node, process less time-sensitive data at aggregate nodes, and send non-time-sensitive data directly to the cloud for storage and analysis (a small sketch of this tiered routing follows this list).

10. Advantages: Fog computing offers several advantages, including improved security, lower operating costs, quicker decision-making, better privacy, enhanced business agility, and better data handling. It is also suitable for remote and harsh environments.

11. Applications: Fog computing finds applications in real-time health analysis, intelligent power-efficient systems, real-time rail monitoring, pipeline optimization, real-time windmill and turbine analysis, and more.

12. Challenges: Challenges of fog computing include increased power consumption compared to a centralized cloud, data security concerns in a distributed network, reliability and fault tolerance, dynamic programming architecture, and the need for real-time analysis across a vast number of nodes.
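
The tiered routing from point 9 can be sketched as a simple dispatch function. The tier names and sensitivity classes below are illustrative assumptions, not part of any fog standard:

```python
from enum import Enum

class Sensitivity(Enum):
    CRITICAL = 1   # very time-sensitive, e.g. safety alarms
    MODERATE = 2   # can wait seconds to minutes
    LOW = 3        # archival or batch-analytics data

def route_reading(sensitivity: Sensitivity) -> str:
    """Pick a processing tier for a sensor reading by its time-sensitivity."""
    if sensitivity is Sensitivity.CRITICAL:
        return "nearest-fog-node"     # handle within milliseconds
    if sensitivity is Sensitivity.MODERATE:
        return "aggregate-fog-node"   # summarize, then forward
    return "cloud"                    # long-term storage and analysis

print(route_reading(Sensitivity.CRITICAL))  # -> nearest-fog-node
```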

Fog computing is a pivotal approach addressing the evolving landscape of IoT data processing by bringing cloud capabilities closer to the data source, thereby mitigating key challenges.

As we conclude our 9-week exploration of IoT, fog computing stands out as a crucial bridge between the cloud and IoT devices. It addresses data challenges and offers real-time solutions. From healthcare to power efficiency and more, its applications are vast.

See you in Week 10. Till then, stay focused!
