Aptos Labs

Bringing the Future On-Chain

Introducing New Utilities and Collections in Aptos Framework

6 min read · Mar 21, 2025


We’re introducing two powerful additions to the Aptos Framework: Ordered Maps and Big Ordered Maps. These new key-value collections extend the power and flexibility of the Move language — bringing efficient, predictable performance to a broad range of on-chain use cases, from real-time DeFi systems to high-throughput on-chain indexing and analytics.

To make these data structures even more performant — and to unlock new optimization strategies for developers — we’re also introducing new Move-native functions: mem::swap, vector::move_range, cmp::compare, and bcs::constant_serialized_size. Together, these additions further strengthen Move on Aptos as the platform of choice for developers building fast, secure, and scalable on-chain applications.

Ordered Map

OrderedMap is a struct that is stored entirely within its parent resource. It is currently implemented as a sorted vector. Because resources are limited in size, and the implementation benefits from optimized vector operations (memcpy), it delivers great performance, roughly O(log(n)), with a straightforward implementation.

In our performance tests, we compared OrderedMap to the alternative (SimpleMap). We measured how long it takes (in microseconds) to perform an insert and a remove operation on maps of various sizes.

With feature parity and improved performance, OrderedMap fully replaces and supersedes SimpleMap. We could not replace SimpleMap in place, because SimpleMap has a fixed, predefined layout. For that reason, OrderedMap is defined as an enum (a new Move 2 feature), allowing for future improvements and extensions.
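To illustrate, here is a minimal usage sketch, assuming the `aptos_std::ordered_map` module and Move 2 receiver-style calls; see the in-code documentation for the exact API:

```move
module example::ordered_map_demo {
    use aptos_std::ordered_map;

    fun demo() {
        // Create an empty map and insert a few entries.
        // Keys are kept sorted internally (sorted-vector layout).
        let map = ordered_map::new<u64, u64>();
        map.add(2, 20);
        map.add(1, 10);

        // Lookups and membership checks take references to keys.
        assert!(*map.borrow(&1) == 10, 0);
        assert!(map.contains(&2), 1);

        map.remove(&1);
    }
}
```

Because the whole map lives inside one resource, creating, reading, and dropping it costs no extra storage slots beyond the parent resource itself.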

Big Ordered Map

BigOrderedMap is a key-value map that can grow arbitrarily large, and stores its contents across multiple (limited-in-size) resources. Its current implementation is a B+ tree, chosen because it is best suited to the on-chain storage layout, where the majority of cost comes from loading and writing storage items, and there are no partial reads or writes of them.

The implementation has a few characteristics that make it very versatile and useful across a wide range of use cases:

  • When it has few elements, it stores all of them within the resource that contains it, providing performance comparable to OrderedMap itself, and then dynamically grows across multiple resources as more and more elements are added
  • It reduces the number of conflicts: modifications to different parts of the key space are generally parallel, and it provides knobs for tuning between parallelism and size
  • All operations have guaranteed upper bounds on performance (how long they take, as well as how much execution and IO gas they consume), allowing for safe usage across a variety of use cases.
    - One caveat is the refundable storage fee. By default, an operation that requires the map to grow to more resources needs to pay the storage fee for them. The implementation provides an option to pre-pay for storage slots and to reuse them as elements are added and removed, allowing applications to achieve fully predictable overall gas charges, if needed.
  • If keys and values are within the size limits the map was configured with, inserts will never fail unpredictably, as the map internally understands and manages the maximal resource size limits.
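A minimal sketch of the API, under the same assumptions as above (the `aptos_std::big_ordered_map` module, Move 2 receiver syntax, and a `new`/`destroy_empty` pair; configuration entry points such as a `new_with_config` variant for tuning node degrees and slot reuse should be checked against the in-code documentation):

```move
module example::big_ordered_map_demo {
    use aptos_std::big_ordered_map;

    fun demo() {
        // Default configuration: node degrees are derived automatically
        // from the key/value types. Small maps stay inline in the parent
        // resource; larger ones spill into separate B+ tree node resources.
        let map = big_ordered_map::new<u64, u64>();
        map.add(1, 100);
        assert!(*map.borrow(&1) == 100, 0);
        map.remove(&1);

        // The map owns storage slots, so it must be explicitly destroyed.
        map.destroy_empty();
    }
}
```

The configuration knobs mentioned above (node degrees, slot pre-payment/reuse) are what let you trade parallelism against storage cost for a given workload.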

This is in contrast to a SmartTable, whose current implementation has various limitations:

  • Collisions can degrade performance arbitrarily
  • It doesn’t understand or manage resource limits, and so it can get into a state where some keys cannot be inserted into it
  • Any two modification operations conflict with one another, making any workload using it fully sequential

With feature parity and improved performance and behavior, BigOrderedMap fully replaces and supersedes SmartTable.

For performance, we measured two things: how it performs at large scale, and how it performs at small scale.

At large scale

Most relevantly, we measured performance at scale and compared it to SmartTable, the previous hash-based implementation of an unbounded key-value map. In the test, we add 1M keys into each and measure sequential performance:

This shows that BigOrderedMap is more storage-efficient, especially when keys and values are small, because SmartTable stores a hash in addition to the key and value. It is also more performant, even at 1M keys, when we optimize for storage costs (i.e., more elements in a single bucket). As we reduce the bucket size, SmartTable becomes more competitive, but its storage costs increase significantly.

Note: the test is run on a single thread to compare apples to apples, as SmartTable has no parallelism. BigOrderedMap is parallel, and running on more threads gives an order of magnitude higher throughput.

At small scale

We also measured performance at small scale, to see how much overhead there is in using BigOrderedMap instead of just OrderedMap, when it is unknown whether the data will grow too large to be stored in a single resource.

Here we measure the microseconds taken for a single pair of insert + remove operations on a map of varying size.

Here we can see that when the map is configured to keep everything in a single resource, the overhead is less than 2x. With a configuration tuned for parallelism, the overhead of individual operations remains reasonable (another 2–3x). But in all cases it scales extremely well as the dataset increases in size.

Alternatives

Unordered maps can in general outperform ordered maps, as they have more freedom. But:

  • it is quite complex to provide a hash-table-based approach with predictable performance under collisions, when operations can be defined after the fact.
  • as the evaluation shows, due to the efficiency of comparison (relative to hashing) and memcpy operations, we can achieve quite comparable performance, reducing the need for separate unordered versions

Usage

If you want to start using them, take a look at the usage guide or the in-code documentation (ordered_map.move and big_ordered_map.move).

Native functions

We’ve added a set of new native functions for value manipulation that improve functionality and performance:

  • mem::swap - Swaps the contents of two mutable references. This enables modifying fields of structs and enums that don’t have the copy and drop abilities, without destructing and recreating the whole struct, which makes building mutable collections easier.
  • vector::move_range - Efficiently moves a range of elements from one vector to another at a specific position, keeping the order of the rest of the elements.
  • cmp::compare - Compares two Move values of the same type. This enables building generic collections and utilities, like the ordered maps above, sorting and binary search utilities, priority queues, etc., which was not possible before.
  • bcs::constant_serialized_size - If a type has a known constant serialized size in BCS format (always the same, independent of the instance), this returns it. This allows for specialized optimizations within data structures for such types.
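A small sketch of how these natives compose. The module paths (`aptos_std::mem`, `std::cmp`, `std::bcs`) and the `Ordering` helper `is_lt` are our assumptions about where these functions live; verify them against the framework documentation before use:

```move
module example::natives_demo {
    use std::bcs;
    use std::cmp;
    use std::option;
    use std::vector;
    use aptos_std::mem;

    fun demo() {
        // mem::swap: exchange the contents of two mutable references
        // without requiring copy or drop on the values.
        let a = 1u64;
        let b = 2u64;
        mem::swap(&mut a, &mut b);
        assert!(a == 2 && b == 1, 0);

        // cmp::compare: a total order over values of the same type,
        // usable by generic collections like the ordered maps above.
        assert!(cmp::compare(&1u64, &2u64).is_lt(), 1);

        // vector::move_range: splice a range out of one vector and into
        // another, preserving the order of the remaining elements.
        let from = vector[1, 2, 3, 4];
        let to = vector[10, 11];
        vector::move_range(&mut from, 1, 2, &mut to, 1);
        assert!(from == vector[1, 4], 2);
        assert!(to == vector[10, 2, 3, 11], 3);

        // bcs::constant_serialized_size: a u64 always serializes to
        // 8 bytes, so its constant size is known.
        assert!(bcs::constant_serialized_size<u64>() == option::some(8), 4);
    }
}
```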

Vector performance improvements

A lot of common operations on vectors (like insert, remove, append, and trim) require shifting a portion of the vector. Previously, only individual elements could be exchanged with vector::swap, so all those operations were implemented in Move, swapping one element at a time.

In most languages, such operations are implemented through extremely efficient memcpy functions (specialized for the hardware they run on); in Rust, Vec operations use ptr::copy_nonoverlapping for this. We introduce here `native fun range_move<T>(from: &mut vector<T>, removal_position: u64, length: u64, to: &mut vector<T>, insert_position: u64);`, which moves a range of elements from one vector to another at a specific position, keeping the order of the rest of the elements. It is a single method that generalizes all four operations above, and more.
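As a sketch of that generalization (the helper names below are hypothetical, and we assume the `vector::move_range` signature and Move 2 receiver syntax), insert and trim can each be expressed as a single range move:

```move
module example::move_range_demo {
    use std::vector;

    // insert(v, i, x) via move_range: build a one-element vector
    // and splice its single element into v at position i.
    fun insert_via_move_range<T>(v: &mut vector<T>, i: u64, x: T) {
        let tmp = vector::singleton(x);
        vector::move_range(&mut tmp, 0, 1, v, i);
        tmp.destroy_empty();
    }

    // trim(v, new_len) via move_range: move the tail out in one
    // operation and return it, instead of popping element by element.
    fun trim_via_move_range<T>(v: &mut vector<T>, new_len: u64): vector<T> {
        let tail = vector::empty();
        let len = v.length();
        vector::move_range(v, new_len, len - new_len, &mut tail, 0);
        tail
    }
}
```

Remove and append fall out the same way: a remove is a length-1 move into a scratch vector, and an append is a move of the whole source vector onto the end of the destination.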

The performance (and corresponding gas cost) improvements are:

Start Experimenting Today

With the introduction of Ordered Maps, Big Ordered Maps, and new native functions, the Aptos Framework continues to push performance, scalability, and developer ergonomics forward.

These new collections offer predictable performance, efficient gas usage, and flexibility for both small-scale and large-scale applications — while the supporting primitives make it easier than ever to build your own high-performance data structures. Whether you’re building DeFi protocols, indexers, or any data-intensive smart contracts, these tools give you the power to do more — with less overhead and more control.



Written by Aptos Labs

Aptos Labs is a premier Web3 studio of engineers, researchers, strategists, designers, and dreamers building on Aptos, the Layer 1 blockchain.
