Top-10 Vulnerabilities in Substrate-based Blockchains Using Rust

Bloqarl · Published in Rektoff
15 min read · Nov 14, 2023

The information in this article is reproduced from a Polkadot forum post titled "Common Vulnerabilities in Substrate/Polkadot Development", written in September 2023 by Vincent Di Giambattista, Chief Information Security Officer at Parity Technologies.

Substrate is a powerful and flexible framework for building blockchains, written in Rust.

It is the chosen framework for several prominent and large-scale protocols, including Manta Network, Centrifuge, Band Protocol, and Aleph Zero. These diverse applications demonstrate Substrate’s adaptability and strength in powering a wide range of blockchain solutions.

Nonetheless, while Rust provides many built-in exploit mitigations, it is not hacker-proof. Understanding the vulnerabilities below and acting on them can be a game changer.

Let’s explore the Top 10 Vulnerabilities in Substrate-based blockchains.
We will:

  • guide you through these vulnerabilities
  • explain why they’re risky
  • showcase the impact they might have
  • show you how to mitigate them

Top 10 Vulnerabilities:

  • Insecure Randomness
  • Storage Exhaustion
  • Unbounded Decoding
  • Insufficient Benchmarking
  • XCM Arbitrary Execution
  • XCM DoS
  • Unsafe Arithmetic
  • Unsafe Conversion
  • Replay Issues
  • Outdated Crates

Insecure Randomness

Description:

Insecure randomness arises from reliance on weak or predictable randomness sources, which malicious actors can manipulate or anticipate.

Randomness Collective Flip Pallet (Insecure Approach):

The Randomness Collective Flip pallet utilizes the hashes of the previous 81 blocks to generate a random value.

fn on_initialize(block_number: BlockNumberFor<T>) -> Weight {
    let parent_hash = <frame_system::Pallet<T>>::parent_hash();
    // ...
    <RandomMaterial<T>>::mutate(|ref mut values| {
        if values.try_push(parent_hash).is_err() {
            let index = block_number_to_index::<T>(block_number);
            values[index] = parent_hash;
        }
    });
    // ...
}

// ...

fn random(subject: &[u8]) -> (T::Hash, BlockNumberFor<T>) {
    let block_number = <frame_system::Pallet<T>>::block_number();
    let index = block_number_to_index::<T>(block_number);

    let hash_series = <RandomMaterial<T>>::get();
    let seed = if !hash_series.is_empty() {
        // Always the case after block 1 is initialized
        hash_series
            .iter()
            .cycle()
            .skip(index)
            // RANDOM_MATERIAL_LEN = 81
            .take(RANDOM_MATERIAL_LEN as usize)
            .enumerate()
            .map(|(i, h)| (i as i8, subject, h).using_encoded(T::Hashing::hash))
            .triple_mix()
    } else {
        T::Hash::default()
    };
    (seed, block_number.saturating_sub(RANDOM_MATERIAL_LEN.into()))
}
  • The on_initialize function updates a storage item RandomMaterial with the parent block's hash at each block, cycling through the list of stored hashes.
  • The random function then derives a random value from a combination of the block number, a subject input, and the hash series.

Impact

Why is it insecure?

The approach shown above is less secure because the randomness is derived directly from past block hashes, which block producers can influence and any observer can recompute in advance.

This is particularly risky in applications like lotteries or voting systems on the blockchain, where the integrity and unpredictability of random numbers are crucial.

Mitigation

Instead, a more secure approach should be used: the Verifiable Random Function (VRF) from the BABE pallet.

VRF from Pallet BABE

Pallet BABE employs a Verifiable Random Function (VRF) to generate random values.

VRFs are cryptographic primitives that provide verifiably random outputs, which are difficult for adversaries to predict or influence.

The randomness generated through this method is thus more secure and robust against manipulative actions.

fn compute_randomness(
    last_epoch_randomness: BabeRandomness,
    epoch_index: u64,
    rho: impl Iterator<Item = BabeRandomness>,
    rho_size_hint: Option<usize>,
) -> BabeRandomness {
    let mut s = Vec::with_capacity(40 + rho_size_hint.unwrap_or(0) * RANDOMNESS_LENGTH);
    s.extend_from_slice(&last_epoch_randomness);
    s.extend_from_slice(&epoch_index.to_le_bytes());
    for vrf_output in rho {
        s.extend_from_slice(&vrf_output[..]);
    }
    sp_io::hashing::blake2_256(&s)
}

// ...

fn randomness_change_epoch(next_epoch_index: u64) -> BabeRandomness {
    let this_randomness = NextRandomness::<T>::get();
    let segment_idx: u32 = SegmentIndex::<T>::mutate(|s| sp_std::mem::replace(s, 0));
    // Overestimate to the segment being full.
    let rho_size = (segment_idx.saturating_add(1) * UNDER_CONSTRUCTION_SEGMENT_LENGTH) as usize;
    let next_randomness = compute_randomness(
        this_randomness,
        next_epoch_index,
        (0..segment_idx).flat_map(|i| UnderConstruction::<T>::take(&i)),
        Some(rho_size),
    );
    NextRandomness::<T>::put(&next_randomness);
    this_randomness
}
  • The compute_randomness function aggregates the randomness from the last epoch, the epoch index, and an iterator of VRF outputs from the prior epoch to compute new randomness.
  • The randomness_change_epoch function is called at each epoch change to update the randomness, utilizing the compute_randomness function.

Storage Exhaustion

Description:

Storage Exhaustion occurs when inadequate charging for storage use facilitates malicious exploitation, slowing down the system and escalating operational costs.

Example:

// Vulnerable smart contract
#[ink::contract]
mod storage_exhaustion_contract {
    use ink_storage::collections::HashMap as StorageHashMap;

    #[ink(storage)]
    pub struct StorageContract {
        data: StorageHashMap<AccountId, Vec<u8>>,
    }

    impl StorageContract {
        #[ink(constructor)]
        pub fn new() -> Self {
            Self {
                data: StorageHashMap::new(),
            }
        }

        // Vulnerable function: no checks on storage use or charges
        #[ink(message)]
        pub fn store_data(&mut self, user: AccountId, data: Vec<u8>) {
            self.data.insert(user, data);
        }
    }
}

Simplified Explanation with an analogy

Imagine a public storage facility with very cheap rental rates. Soon, a person realizes they can rent out a large portion of the units at a low cost.

They start hoarding, filling up unit after unit with junk, making it difficult for others to find available units. Over time, the facility gets crowded, and the management has to spend more on maintenance.

In this analogy, the storage facility is the blockchain, the cheap rental rate is the low storage cost, the hoarder is the attacker, and the junk represents useless data bloating the storage, which slows down the system and raises operational costs.

Impact:

Attackers can exploit low storage costs to bloat the storage, making the system sluggish and expensive to maintain.

Mitigation:

Implement checks to ensure the cost charged to users is proportional to the storage used, and consider setting limits on the amount of data that can be saved to storage to prevent abuse.

Let’s see how our contract looks with the advised mitigation:

// Mitigated smart contract
#[ink::contract]
mod mitigated_storage_contract {
    use ink_storage::collections::HashMap as StorageHashMap;

    #[ink(storage)]
    pub struct StorageContract {
        data: StorageHashMap<AccountId, Vec<u8>>,
        cost_per_byte: Balance, // New field to store the cost per byte
    }

    impl StorageContract {
        #[ink(constructor)]
        pub fn new(cost_per_byte: Balance) -> Self { // Accept cost_per_byte as a parameter
            Self {
                data: StorageHashMap::new(),
                cost_per_byte, // Initialize cost_per_byte
            }
        }

        // Mitigated function: checks storage use and charges appropriately
        #[ink(message)]
        pub fn store_data(&mut self, user: AccountId, data: Vec<u8>) -> Result<(), &'static str> {
            let storage_cost = data.len() as Balance * self.cost_per_byte; // Charge per stored byte
            let user_balance = self.env().balance(user); // Assume a helper to read the user's balance
            if user_balance < storage_cost {
                return Err("Insufficient balance for storage cost");
            }
            self.env().transfer_fee(user, storage_cost); // Assume a function to transfer the fee
            self.data.insert(user, data);
            Ok(())
        }
    }
}
1. Added cost_per_byte field: a new field cost_per_byte was introduced to the StorageContract struct to store the cost per byte of storage.

2. Modified constructor: the constructor new now accepts cost_per_byte as a parameter, allowing you to set the cost per byte at contract creation.

3. Updated store_data function: the storage_cost is now calculated as data.len() as Balance * self.cost_per_byte instead of data.len() as Balance, so users are charged in proportion to the storage they consume.

Unbounded Decoding

Description

The Unbounded Decoding vulnerability in Polkadot/Substrate arises when there’s no depth limit set for decoding objects like calls (extrinsics), allowing attackers to craft highly nested calls that the system struggles to decode, leading to a stack overflow.

pub fn dispatch_all(/* ... */) -> DispatchResultWithPostInfo {
    T::DispatchOrigin::ensure_origin(origin)?;

    // The next line can cause a stack overflow
    let call = <T as Config>::RuntimeCall::decode(&mut &call[..])
        .map_err(|_| Error::<T>::Undecodable)?;
    // ...
}

Simplified Explanation with an analogy

Imagine a city road system as the network. Normally, cars (data) flow smoothly. Now, imagine a truck (maliciously crafted data) with an excessively tall load tries to pass under a low bridge (the decoding function), but it gets stuck because it’s too tall (too deeply nested).

This causes a traffic jam (stack overflow), blocking all cars behind it from moving (preventing validators from generating new blocks). Over time, the entire city’s traffic (network) comes to a halt because of this blockage.

The depth limit acts like a height restriction on the bridge, ensuring only trucks (data) of a manageable size can pass through, keeping traffic flowing smoothly.

Impact

A stack overflow from highly nested calls can disrupt the network’s operation, possibly preventing validators from generating new blocks and halting the network.

Mitigation

To mitigate this vulnerability, it’s advised to set a depth limit for decoding objects. This can be done by substituting the decode method with a decode_with_depth_limit method in the code, ensuring that the decoding process doesn't exceed a certain depth and preventing stack overflows.

pub fn dispatch_all(/* ... */) -> DispatchResultWithPostInfo {
    T::DispatchOrigin::ensure_origin(origin)?;

    // The next line is now protected against stack overflow
    // by setting a depth limit
    let call = <T as Config>::RuntimeCall::decode_with_depth_limit(
        MAX_DEPTH,
        &mut &call[..],
    )
    .map_err(|_| Error::<T>::Undecodable)?;
    // ...
}

In this modified code, MAX_DEPTH is a constant that defines the maximum decoding depth allowed, providing a guard against stack overflow caused by excessively nested calls.

Insufficient Benchmarking

Description

This vulnerability arises when an extrinsic (a type of function call in Substrate) does not have accurate benchmarking.

Benchmarks help in understanding the computational and storage costs of executing an extrinsic.

#[pallet::call]
impl<T: Config> Pallet<T> {
    /// Index and store data off chain.
    #[pallet::call_index(0)]
    #[pallet::weight(T::WeightInfo::store())]
    pub fn store(origin: OriginFor<T>, remark: Vec<u8>) -> DispatchResultWithPostInfo {
        ensure!(!remark.is_empty(), Error::<T>::Empty);
        let sender = ensure_signed(origin)?;
        let content_hash = sp_io::hashing::blake2_256(&remark);
        let extrinsic_index = <frame_system::Pallet<T>>::extrinsic_index()
            .ok_or_else(|| Error::<T>::BadContext)?;

        sp_io::transaction_index::index(extrinsic_index, remark.len() as u32, content_hash);
        Self::deposit_event(Event::Stored { sender, content_hash: content_hash.into() });

        Ok(().into())
    }
}

In this code, the weight is being calculated without considering the size of the remark vector, which means the weight could be underestimated.

Impact

Incorrect benchmarking can slow down the network and allow attackers to spam the system by continuously calling under-benchmarked extrinsics at lower-than-actual costs.

Simplified Explanation with an analogy

Imagine a public bus with a fixed ticket price, regardless of the distance traveled. Now, if a group of people start taking long-distance rides frequently, the fuel cost for the bus company increases, but the ticket revenue remains the same.

In the code scenario, think of the bus as the network, the ticket price as the benchmarked weight, and the group of people as attackers.

If the ticket price (weight) doesn’t accurately reflect the cost of the trip (execution cost), it could lead to financial loss for the bus company (network slowdown or spam).

Hence, ticket prices should be set accurately for different distances, much like how benchmarks should accurately reflect the computational cost for different extrinsic inputs.

Mitigation

Perform benchmarks using worst-case scenario conditions to ensure accurate weight calculation, as demonstrated in the provided code snippet below.

This practice helps in better understanding and accounting for the execution costs, thus preventing potential spamming and slowdown of the network.

benchmarks! {
    store {
        let l in 1 .. 1024 * 1024;
        let caller: T::AccountId = whitelisted_caller();
    }: _(RawOrigin::Signed(caller.clone()), vec![0u8; l as usize])
    verify {
        assert_last_event::<T>(Event::Stored {
            sender: caller,
            content_hash: sp_io::hashing::blake2_256(&vec![0u8; l as usize]).into(),
        }.into());
    }

    impl_benchmark_test_suite!(Remark, crate::mock::new_test_ext(), crate::mock::Test);
}

XCM Arbitrary Execution

Description

XCM Arbitrary Execution vulnerability arises in Polkadot/Substrate when XCM (Cross-Consensus Messaging) is improperly configured, potentially allowing attackers to interfere with the system or perform unauthorized actions.

pub struct XcmConfig;

impl Config for XcmConfig {
    // ...
    type SafeCallFilter = Everything;
    // ...
}

Here, Everything is set as the SafeCallFilter, which essentially allows all calls to go through without any filtering.

Impact

Attackers could exploit this vulnerability to execute any Transact instruction, possibly leading to unauthorized actions or system disruptions.

Simplified Explanation with an analogy

Imagine a building with a high-security door, but someone left it set to “let anyone in” mode. In Polkadot, XCM (Cross-Consensus Messaging) is like that door, controlling who can send messages or transactions.

If not set up correctly, it’s like leaving the door open, allowing anyone, including malicious actors, to send messages or transactions that could harm the system.

By adjusting the settings on the door (modifying the XCM configuration), like setting up a guest list (SafeCallFilter), you ensure only authorized individuals (approved calls) can enter, keeping the building (system) secure.

Mitigation

Limiting the usage of XCM execute and send operations can mitigate this issue. Specifically, changing the Everything value to a more detailed SafeCallFilter, as shown in the provided code, can filter out undesired calls, enhancing the system's security.

XCM DoS

Description

XCM (Cross-Consensus Messaging) DoS (Denial of Service) vulnerability arises when inadequate XCM setup allows attackers to overload the system by spamming XCM messages to other chains.

#[ink::contract]
mod xcm_vulnerable_contract {
    #[ink(storage)]
    pub struct XcmContract;

    impl XcmContract {
        [...]

        #[ink(message)]
        pub fn handle_xcm_message(&self, message: XcmMessage) {
            // No validation of incoming messages
            self.process_message(message);
        }

        [...]
    }
}

handle_xcm_message represents a simplified vulnerable XCM handler that processes every incoming message without validation.

Impact

Attackers exploiting this vulnerability can cause a bottleneck in the XCM queues of the receiving chain, potentially stopping it from processing new messages or even dropping incoming messages, thereby disrupting the network’s functionality.

Simplified Explanation with an analogy

Imagine you own a small post office, and you accept packages from anyone to deliver.

Now, suppose a malicious person decides to overload your post office by sending in an enormous number of packages at once, so many that you can’t even move inside the post office.

This scenario is similar to the XCM DoS vulnerability, where an attacker spams a blockchain with so many messages that the system can’t handle it, causing a backlog or even halting the processing of new messages. It’s like a traffic jam in the network, preventing normal operations.

Mitigation

Properly setting up XCM to filter incoming calls and allowing interactions only with trusted parachains can mitigate this issue, preventing DoS attacks by blocking malicious or spammy XCM messages from untrusted sources.

#[ink::contract]
mod xcm_secure_contract {
    #[ink(storage)]
    pub struct XcmContract;

    impl XcmContract {
        [...]

        #[ink(message)]
        pub fn handle_xcm_message(&self, message: XcmMessage) {
            // Only process messages from trusted sources with valid content
            if self.is_valid_source(message.source) && self.is_valid_content(message.content) {
                self.process_message(message);
            }
        }

        [...]
    }
}

Unsafe Arithmetic

Description

Rust does provide some protection against overflow and underflow in arithmetic operations by default.

When compiling in debug mode, arithmetic operations will panic (crash) on overflow or underflow. In release mode, however, Rust performs “wrapping” arithmetic, where values silently wrap around on overflow or underflow.

However, in Polkadot and other blockchain frameworks, arithmetic is often used to handle critical operations like token transfers, so the default behavior may not be desired.

Unsafe Arithmetic vulnerability in Polkadot/Substrate arises from the potential overflows or underflows in mathematical operations which can lead to incorrect calculation results. This may occur when unsafe math operations are used in the code, which do not handle these cases properly.

fn transfer(sender: &AccountId, receiver: &AccountId, amount: Balance) -> DispatchResult {
    let sender_balance = Balances::<T>::get(sender);
    let receiver_balance = Balances::<T>::get(receiver);

    // Unsafe arithmetic can lead to overflow/underflow
    let new_sender_balance = sender_balance - amount;
    let new_receiver_balance = receiver_balance + amount;

    Balances::<T>::insert(sender, new_sender_balance);
    Balances::<T>::insert(receiver, new_receiver_balance);

    Ok(())
}

Let’s consider this simple token transfer function above in a Polkadot smart contract.

If the function directly subtracts the transferred amount from the sender’s balance and adds it to the receiver’s balance without any checks, an overflow or underflow error might occur.

Impact

Attackers can exploit this vulnerability to manipulate the system, causing serious inconsistencies. For instance, in token transfer scenarios, it could lead to incorrect account balances, enabling attackers to gain unauthorized assets.

Mitigation

Utilizing safe math functions like checked_add or checked_sub, which check for arithmetic errors, is advised to mitigate this issue. Thoroughly reviewing the code for unsafe math operations and replacing them with safe alternatives will significantly reduce the risk associated with this vulnerability.

Here is how to mitigate the code shown above:

fn transfer(sender: &AccountId, receiver: &AccountId, amount: Balance) -> DispatchResult {
    [...]

    // Use checked arithmetic to prevent overflow/underflow
    let new_sender_balance = sender_balance.checked_sub(amount).ok_or(Error::<T>::Overflow)?;
    let new_receiver_balance = receiver_balance.checked_add(amount).ok_or(Error::<T>::Overflow)?;

    [...]
}

Unsafe Conversion

Description

Unsafe Conversion vulnerability occurs when converting one numerical type to another without adequate checks, potentially causing errors that attackers could exploit.

pub fn execute(
    from: H160,
    transaction: &Transaction,
    config: Option<H160>,
) -> Result<(Option<H160>, Option<H160>, CallOrCreateInfo), DispatchErrorWithPostInfo<PostDispatchInfo>> {
    // ...
    match action {
        ethereum::TransactionAction::Call(target) => {
            let res = match T::Runner::call(
                // ...
                gas_limit.low_u64(), // Possible overflow
                // ...
            ) {
                Ok(res) => res,
                Err(e) => {
                    // ...
                }
            };
            // ...
        }
        // ...
    }
}

As you can see, it uses low_u64 to downcast the gas_limit value so it can be passed to the Runner::call method. However, this downcast silently truncates values that do not fit in 64 bits, effectively an overflow.

Impact

Errors from unsafe conversions could lead to overflows or incorrect values, which attackers might leverage to induce inconsistencies or undesirable behaviors in the system.

Mitigation

Ensure proper checks during type conversion, avoid downcasting, and utilize safe conversion methods like unique_saturated_into. Reviewing code for unsafe conversions and rectifying them is crucial for preventing this vulnerability.

let res = match T::Runner::call(
    // ...
    gas_limit.unique_saturated_into(), // Mitigated overflow
    // ...
) {
    Ok(res) => res,
    Err(e) => {
        // ...
    }
};

Replay Issues

Description

Replay Issues vulnerability arises from improper handling of transaction nonces, which could potentially allow attackers to repeat transactions and slow down the system.

Here’s how a simplified ink! smart contract with a vulnerability to Replay Issues might look:

#[ink::contract]
mod replay_vulnerable_contract {
    use ink_storage::collections::HashMap as StorageHashMap;

    #[ink(storage)]
    pub struct ReplayVulnerableContract {
        executed_transactions: StorageHashMap<TransactionId, bool>,
    }

    #[ink(event)]
    pub struct TransactionProcessed {
        #[ink(topic)]
        id: TransactionId,
    }

    impl ReplayVulnerableContract {
        #[ink(constructor)]
        pub fn new() -> Self {
            Self {
                executed_transactions: StorageHashMap::new(),
            }
        }

        #[ink(message)]
        pub fn process_transaction(&mut self, id: TransactionId) {
            // Missing nonce/replay verification here
            self.executed_transactions.insert(id, true);
            self.env().emit_event(TransactionProcessed { id });
        }
    }

    pub type TransactionId = u64;
}

Impact

Attackers exploiting this vulnerability can repeat transactions, causing network congestion, and possibly leading to slowed or halted network operations.

Mitigation

Ensuring nonces are correctly set up in the system logic and implementing checks to prevent transaction repetition can mitigate this vulnerability. For instance, in the Frontier security issue, transaction validations (nonces) were added back to the State Transition Function (STF) to prevent replay attacks.

Frontier is a Substrate pallet for Ethereum compatibility. The State Transition Function (STF) is crucial in validating transactions while blocks are being created. By reintroducing transaction validations, specifically checking nonces (which are unique values associated with transactions), into the STF, the system can ensure that transactions are valid and haven’t been submitted before, thus preventing replay attacks where a malicious actor might try to resubmit the same transaction to the network to cause disruptions or exploit the system.

To mitigate the Replay Issues from the code shown above, you would need to add a nonce verification step in the process_transaction function to ensure that a transaction with the given TransactionId hasn't been executed before.

Here's how you could modify the code:

#[ink::contract]
mod replay_protected_contract {
    use ink_storage::collections::HashMap as StorageHashMap;

    #[ink(storage)]
    pub struct ReplayProtectedContract {
        executed_transactions: StorageHashMap<TransactionId, bool>,
    }

    #[ink(event)]
    pub struct TransactionProcessed {
        #[ink(topic)]
        id: TransactionId,
    }

    impl ReplayProtectedContract {
        [...]

        #[ink(message)]
        pub fn process_transaction(&mut self, id: TransactionId) -> Result<(), &'static str> {
            if self.executed_transactions.contains_key(&id) {
                return Err("Transaction already processed");
            }
            self.executed_transactions.insert(id, true);
            self.env().emit_event(TransactionProcessed { id });
            Ok(())
        }
    }

    pub type TransactionId = u64;
}

The process_transaction function checks whether a transaction with the given TransactionId has already been executed by looking it up in executed_transactions. If it has, the function returns an error; otherwise, it processes the transaction and emits an event.

Outdated Crates

Description

Outdated Crates vulnerability arises when outdated, unsafe, or incompatible versions of dependencies (crates) are used. This inconsistency can expose the system to known vulnerabilities and incompatibility issues.

The following pallet uses dependencies from different versions of Substrate. This could lead to serious incompatibility problems:

[package]
name = "sample-pallet"
description = "Sample pallet with incoherent dependencies"
version = "1.0.0"
edition = "2021"

[dependencies]
codec = { package = "parity-scale-codec", version = "3.6.1" }
scale-info = { version = "2.5.0", features = ["derive"] }
log = { version = "0.4.14", default-features = false }

frame-system = { version = "4.0.0-dev", default-features = false, git = "https://github.com/paritytech/substrate.git", branch = "polkadot-v1.0.0" }
frame-support = { version = "4.0.0-dev", default-features = false, git = "https://github.com/paritytech/substrate.git", branch = "polkadot-v0.9.0" }
pallet-balances = { version = "4.0.0-dev", default-features = false, git = "https://github.com/paritytech/substrate.git", branch = "polkadot-v0.8.0" }
pallet-message-queue = { version = "7.0.0-dev", default-features = false, git = "https://github.com/paritytech/substrate.git", branch = "polkadot-v1.0.0" }
pallet-uniques = { version = "4.0.0-dev", default-features = false, git = "https://github.com/paritytech/substrate.git", branch = "polkadot-v0.9.2" }

Impact

Attackers could exploit known weaknesses in outdated crates to compromise the system. Moreover, inconsistent versioning across dependencies can lead to incompatibility problems, causing system malfunctions or unexpected behavior.

Crates are packages in the Rust programming language.
They can be libraries or binaries (executables). Crates can be shared on crates.io, which is the official Rust package registry, or they can be private code that you use in your own projects. Each crate has its own local namespace, and can optionally export some of its types, functions, etc. into a public interface.

Mitigation

Always use the newest and safest versions of dependencies, ensuring consistent versioning across all crates. Keep track of any new risks and fixes, updating the dependencies accordingly to maintain system security and functionality.

In order to fix the Cargo.toml file above, all the Substrate dependencies should be aligned to the same branch (polkadot-v1.0.0), which mitigates the risk associated with outdated or incompatible crates.
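A corrected dependency section, with every Substrate crate pinned to the same branch, would look like this (the version numbers are carried over from the example above and should match whatever the chosen branch actually ships):

```toml
[dependencies]
codec = { package = "parity-scale-codec", version = "3.6.1" }
scale-info = { version = "2.5.0", features = ["derive"] }
log = { version = "0.4.14", default-features = false }

frame-system = { version = "4.0.0-dev", default-features = false, git = "https://github.com/paritytech/substrate.git", branch = "polkadot-v1.0.0" }
frame-support = { version = "4.0.0-dev", default-features = false, git = "https://github.com/paritytech/substrate.git", branch = "polkadot-v1.0.0" }
pallet-balances = { version = "4.0.0-dev", default-features = false, git = "https://github.com/paritytech/substrate.git", branch = "polkadot-v1.0.0" }
pallet-message-queue = { version = "7.0.0-dev", default-features = false, git = "https://github.com/paritytech/substrate.git", branch = "polkadot-v1.0.0" }
pallet-uniques = { version = "4.0.0-dev", default-features = false, git = "https://github.com/paritytech/substrate.git", branch = "polkadot-v1.0.0" }
```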

Verbosity Issues

Description

Verbosity Issues concern the lack of detailed logs from collators, nodes, or RPC in a Polkadot/Substrate network, making it challenging to diagnose issues when anomalies like crashes or network halts occur.

Here’s a simplified example illustrating a verbosity issue:

#[ink::contract]
mod example_contract {
    #[ink(storage)]
    pub struct ExampleContract {
        value: u32,
    }

    impl ExampleContract {
        #[ink(constructor)]
        pub fn new() -> Self {
            Self { value: 0 }
        }

        #[ink(message)]
        pub fn set_value(&mut self, new_value: u32) {
            self.value = new_value;
        }
    }
}

In this contract, the set_value method updates the value state variable but doesn't provide any logging.

Impact

Insufficient logging can significantly delay the identification and resolution of issues, particularly during critical incidents like a network halt, causing extended downtime and resource drain.

Mitigation

Implementing comprehensive logging in critical parts of your pallets, and regularly reviewing these logs for suspicious activity can mitigate this vulnerability. Ensuring adequate verbosity will expedite the troubleshooting process, reducing downtime.

To mitigate verbosity issues, you could add an event to log the change:

#[ink::contract]
mod example_contract {
    #[ink(storage)]
    pub struct ExampleContract {
        value: u32,
    }

    #[ink(event)]
    pub struct ValueChanged {
        #[ink(topic)]
        new_value: u32,
    }

    impl ExampleContract {
        [...]

        #[ink(message)]
        pub fn set_value(&mut self, new_value: u32) {
            self.value = new_value;
            self.env().emit_event(ValueChanged { new_value });
        }
    }
}

Now, with the ValueChanged event, you have logging in place to track changes to the value state variable, which would help in diagnosing issues related to value changes.

As we come to the end of our journey through the intricate world of Substrate-based blockchain vulnerabilities, we hope you feel more equipped and informed.

It’s clear that while Rust provides a strong foundation, there’s always more to learn and apply to ensure our digital ecosystems remain secure. Take these insights, use them to enhance your projects, and continue to explore the ever-evolving landscape of blockchain technology.

Thank you for joining us on this enlightening path — let’s keep building a safer, smarter blockchain community. Keep innovating, keep improving, and until next time, happy coding!

Follow Rektoff:

Twitter: https://twitter.com/rektoff_xyz

RustCollective: https://discord.com/invite/QSHRSHqXMA

Website: https://www.rektoff.xyz/

More links: https://linktr.ee/rektoff

To order Security Review (Audit), contact:

Telegram: t.me/gregorymakodzeba

Email: greg@rektoff.xyz

References:

https://forum.polkadot.network/t/common-vulnerabilities-in-substrate-polkadot-development/3938


| Enhance Web3 Security | Build On-Chain | Master DeFi | If you share my same goals, I share everything I learn. My Twitter https://twitter.com/TheBlockChainer