What is the QoS of decentralized storage?
I mentioned QoS (Quality of Service) in my previous article (https://medium.com/@omnigeeker/how-to-really-do-decentralized-storage-good-2abc6a9a1cee), which explains how to design decentralized storage. But what exactly is QoS, and why is it important? I'll try to answer these two questions below.
What’s QoS?
QoE (Quality of Experience)
The degree of delight or annoyance of the user of an application or service. It results from the fulfillment of his or her expectations with respect to the utility and/or enjoyment of the application or service in the light of the user's personality and current state.
QoS (Quality of Service)
1. QoS is about provisioning network services to the level of guarantee required by application-layer services.
2. QoS mechanisms provide the means to ensure that applications receive the network resources they need, so that service delivery achieves the expected level of user QoE (Quality of Experience).
As an underlying platform for future application networks, decentralized storage must be able to provide high QoS so that developers can build on it and deliver high QoE.
What are the key QoS metrics for a basic storage platform?
A decentralized storage platform is still a storage platform, and it should provide basic QoS if it is designed for commercial use. So what QoS should a basic storage platform include?
1. High Availability
High availability is a property of a system: it aims to ensure an agreed level of operational performance, usually measured as uptime, for a higher-than-normal period.
The most common way to measure high availability is the SLA (Service-Level Agreement). It is generally expressed as a number of nines: 99.9% is three nines, 99.99% is four nines. For each stored file, it states what percentage of the time the service is working properly. The more nines, the higher the availability. For example, three nines allow roughly 8.8 hours of downtime per year, while four nines allow only about 53 minutes.
In a decentralized storage platform, downtime in the SLA means any period in which a user requests the service but does not get a reply.
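To make the nines concrete, here is a minimal sketch (Python, with names of my own choosing) that converts an availability target into the maximum downtime it allows per year:

```python
# Minimal sketch: convert an SLA availability target into the maximum
# downtime it allows per year. Names and output format are illustrative.

HOURS_PER_YEAR = 365.25 * 24  # roughly 8766 hours


def allowed_downtime_hours(availability: float) -> float:
    """Maximum yearly downtime (in hours) permitted by a given availability."""
    return (1.0 - availability) * HOURS_PER_YEAR


if __name__ == "__main__":
    for label, sla in [("three-9", 0.999), ("four-9", 0.9999), ("five-9", 0.99999)]:
        hours = allowed_downtime_hours(sla)
        print(f"{label}: about {hours:.2f} hours ({hours * 60:.1f} minutes) of downtime per year")
```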
2. High Reliability
High reliability means the system confirms that data reaches the intended recipient correctly. It stands in contrast to an unreliable protocol, which gives no assurance that the data is delivered to the intended recipient.
In a decentralized storage platform, high reliability means that the user can always retrieve the stored data. The platform must therefore keep at least one intact copy of the data at all times, and the number of copies should be balanced against the extra storage cost.
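As a rough illustration of that balance, here is a hedged sketch: if each replica of a file is lost independently with some probability over a given period, the chance of losing every copy shrinks exponentially with the replica count. The 5% figure below is invented for the example; real failure rates are correlated and have to be measured on the live network.

```python
# Illustrative only: assumes replica losses are independent, which real
# networks violate. The single-replica loss probability below is made up.

def loss_probability(p_single_loss: float, replicas: int) -> float:
    """Probability that every replica of a file is lost in the period."""
    return p_single_loss ** replicas


if __name__ == "__main__":
    for n in range(1, 6):
        print(f"{n} replica(s): probability of total loss = {loss_probability(0.05, n):.2e}")
```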
3. High Performance
Performance covers several other related indicators, including:
1) Transfer speed
2) Request and response time
etc…
These figures are critical. Because they can vary by region and by time of day, we need to collect them accordingly, as in the sketch below.
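A minimal sketch of such collection (the data layout and names are my own, not part of any real API): record response-time samples keyed by region and hour of day, then compare the aggregates.

```python
# Sketch: collect response-time samples per (region, hour) so that
# performance can be compared across regions and times of day.

from collections import defaultdict
from statistics import mean

samples = defaultdict(list)  # (region, hour) -> list of response times in ms


def record(region: str, hour: int, response_ms: float) -> None:
    """Store one response-time sample for the given region and hour of day."""
    samples[(region, hour)].append(response_ms)


def report() -> None:
    """Print average and worst response time for every (region, hour) bucket."""
    for (region, hour), values in sorted(samples.items()):
        print(f"{region} {hour:02d}:00  avg={mean(values):.1f} ms  max={max(values):.1f} ms")


if __name__ == "__main__":
    record("us-east", 9, 42.0)
    record("us-east", 9, 55.5)
    record("eu-west", 21, 80.3)
    report()
```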
Key QoS for decentralized storage
Decentralized storage has some additional critical QoS metrics besides the ones mentioned above. I'll introduce them in two parts: the QoS of the P2P system and the QoS of storage miners. Users and miners are not the only nodes in decentralized storage; there are other nodes that require QoS as well (such as FileCoin's indexer miners, which provide a data indexing service).
1. QoS of P2P systems
A decentralized storage platform transfers data peer to peer (similar to BitTorrent, PPLive, and eDonkey), so the QoS of the P2P transfer system applies to it.
Key QoS metrics for a P2P transfer system:
1) The speed of discovering other nodes that hold the same resource.
2) How quickly the system can distinguish high-speed nodes from low-speed nodes.
3) Useless protocol bytes rate. In a P2P transfer system, useless protocol traffic refers to protocol messages that do not carry actual content. The useless protocol bytes rate is the ratio of useless protocol bytes to all protocol bytes (this rate and the two that follow are computed in the sketch after this list).
4) Data transfer redundancy rate. In a P2P transfer system, the same data is sometimes requested from both peer A and peer B because one of the peers transfers slowly. Since both peers end up transmitting the same data, the duplicated part is redundant. The data transfer redundancy rate is the proportion of redundant transfer bytes to normal transfer bytes.
5) Data request rejection rate. This is the proportion of rejected requests to all requests. A request can be rejected for several reasons: the storage miner cannot find the data, the hard disk is broken, a logic error occurs, or the storage miner deliberately acts maliciously.
6) The proportion of erroneous protocol data. This measures what percentage of P2P protocol messages are wrong. A P2P transfer system is never perfectly clean: inconsistent version upgrades can produce protocol errors, and hackers may forge protocol packets in malicious attacks, generating bad protocol data.
7) NAT traversal indicators. NAT traversal has many indicators of its own, such as traversal time, traversal rate, and so on.
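Here is a hedged sketch of how a node might track a few of the rate metrics above (useless protocol bytes rate, transfer redundancy rate, and request rejection rate). The counter names are illustrative rather than taken from any existing implementation; a real node would update them inside its protocol and transfer loops.

```python
# Sketch of P2P rate metrics built from plain counters. Field names are
# illustrative; a real node would increment them as it handles traffic.

from dataclasses import dataclass


@dataclass
class P2PCounters:
    payload_bytes: int = 0      # protocol bytes that actually carry content
    overhead_bytes: int = 0     # protocol bytes that carry no content ("useless")
    normal_bytes: int = 0       # content bytes received exactly once
    redundant_bytes: int = 0    # same content received again from another peer
    requests: int = 0           # data requests issued
    rejected_requests: int = 0  # requests that were rejected

    def useless_protocol_rate(self) -> float:
        total = self.payload_bytes + self.overhead_bytes
        return self.overhead_bytes / total if total else 0.0

    def redundancy_rate(self) -> float:
        return self.redundant_bytes / self.normal_bytes if self.normal_bytes else 0.0

    def rejection_rate(self) -> float:
        return self.rejected_requests / self.requests if self.requests else 0.0
```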
2. QoS of storage miners
The health of the storage miners is also tied to the health of the entire network. I think that if we want to build well-designed decentralized storage, it is necessary to use ARM-based, low-performance, low-power computers to provide storage, because such machines are cost-effective; for storage miners, lower cost means higher profit. It is therefore very important to define the QoS of the storage mining machine.
1) Response speed. This is the average response time after the request is received.
2) Memory cache hit ratio. A storage service should not read every piece of data from the hard disk; frequently requested content is usually cached in memory. The memory cache hit ratio is an important indicator of how effective the content cache is and how well bandwidth is utilized (see the sketch after this list).
3) Security overhead, i.e. the performance cost of encryption and decryption. For security reasons there is a great deal of encryption and decryption work, so we can quantify how much CPU time and memory these jobs consume.
4) Request error rate. The proportion of requests that do not return data correctly.
5) The proportion of abnormal disk space. Damaged areas of a disk cannot provide correct service; we define them as abnormal disk space. Even a small physical fault can make an entire allocation unit (a plot in PPIO) unable to serve data correctly. This indicator is the ratio of all abnormal disk space on a disk to the total disk space.
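For the miner-side ratios above, a short sketch with plain counters is enough. Nothing here is tied to a specific project's implementation; the counters would come from the miner process itself.

```python
# Sketch of miner-side QoS ratios computed from simple counters.

def cache_hit_ratio(hits: int, misses: int) -> float:
    """Fraction of reads served from the memory cache instead of the disk."""
    reads = hits + misses
    return hits / reads if reads else 0.0


def request_error_rate(failed: int, total: int) -> float:
    """Fraction of requests that did not return data correctly."""
    return failed / total if total else 0.0


def abnormal_space_ratio(abnormal_bytes: int, total_bytes: int) -> float:
    """Fraction of the disk that lies on damaged plots and cannot serve data."""
    return abnormal_bytes / total_bytes if total_bytes else 0.0
```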
3. QoS of other nodes
1) Response speed. This is the average response time after a request is received.
2) Security overhead. The performance cost of encryption and decryption, similar to that of a storage miner.
Different node types also have their own unique QoS metrics.
About QoS of PPIO
Most current decentralized storage projects, such as FileCoin, SiaCoin, Storj, and MaidSafe, do not mention QoS in their whitepapers, academic papers, or blog posts. I think it is hardly possible to build commercial decentralized storage without paying attention to the QoS system, so our team will treat QoS as the most important goal when designing the PPIO decentralized storage public chain project.
Article author: Wayne Wong
If you want to reprint this article, please indicate the source.
If you would like to discuss blockchain, you can contact me at: wayne@pp.io