Ordering Service — The backbone of Hyperledger Fabric

Akash Gupta
Theta One Software
May 1, 2021

In this article, I will describe the purpose of the ordering service and the role it plays in the transaction flow.

Ordering Service

Hyperledger Fabric is designed around a deterministic consensus algorithm: whenever an ordering node distributes a new block to the peers for validation, that block is guaranteed to be final and correct, so the ledger cannot fork. By contrast, distributed blockchain platforms like Ethereum and Bitcoin rely on probabilistic consensus, where a block distributed to a peer may later turn out to lie on an abandoned fork, so finality is only reached with high probability over time. Another distinguishing feature is that ordering nodes in Fabric do not execute or validate chaincode; they only receive endorsed transaction proposal responses from client applications and arrange them into blocks. A group of ordering nodes forms an ordering service.

Orderers maintain the channel configuration

The ordering nodes also maintain the list of organizations that are authorized to create channels; this list is stored in a configuration known as the orderer system channel. The orderer also enforces the basic access control for channels, governing which participants may read from and write to them.

To change a channel configuration, the orderer processes the update transaction against the current set of policies to make sure the submitting participant has administrative rights. If the orderer finds the update request valid, it packages the configuration transaction into a block and sends it to all peers of the channel. Each peer validates the configuration transaction again to confirm that the modification was approved by the orderer, and then applies the new channel configuration.
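The policies the orderer checks here live in the channel configuration, not in orderer.yaml. As an illustrative sketch (the structure follows the stock configtx.yaml; MAJORITY Admins is the common default rule, not something specific to this article's network), the Admins policy that a configuration update must satisfy might look like:

```yaml
# Illustrative configtx.yaml excerpt: a channel config update transaction
# must satisfy the channel's Admins policy before the orderer accepts it.
Channel: &ChannelDefaults
  Policies:
    Readers:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    Writers:
      Type: ImplicitMeta
      Rule: "ANY Writers"
    Admins:
      Type: ImplicitMeta
      Rule: "MAJORITY Admins"   # a majority of the orgs' admins must sign the update
```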

Transaction flow

The process by which a client application updates the ledger consists of three phases, because agreement must be reached among all the peers in the blockchain network.

Phase One — Transaction Proposal

The client application sends a transaction proposal to the peers, which invoke the chaincode to produce the proposed ledger update and then endorse the result. At this point the endorsed proposal result is not committed to the ledger; instead, the endorsing peers return the endorsed proposal response to the client application. In the second phase, the orderer can then package these transaction proposals into blocks for further processing.

Phase Two — Transaction packaging into blocks

In this phase, the client application submits the endorsed transaction proposal response to the ordering service nodes. Several ordering service nodes receive endorsed transaction proposals from many different client applications in parallel, and they collaborate to arrange the transactions into a well-defined sequence and package them into blocks. How many transactions fit into an individual block is determined by the BatchSize and BatchTimeout fields defined in the channel configuration.

BatchSize: Controls the number of messages batched into a block

BatchTimeout: The amount of time to wait before creating a batch
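For reference, these two fields sit in the Orderer section of the channel configuration (configtx.yaml) rather than in orderer.yaml. A sketch with illustrative values (the numbers below are examples, not Fabric's defaults):

```yaml
# Illustrative configtx.yaml excerpt controlling block cutting.
Orderer: &OrdererDefaults
  # Cut a block after this long, even if it is not yet full
  BatchTimeout: 2s
  BatchSize:
    # Maximum number of transactions in a block
    MaxMessageCount: 500
    # A block is never larger than this
    AbsoluteMaxBytes: 10 MB
    # Transactions larger than this get their own block
    PreferredMaxBytes: 2 MB
```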

In Hyperledger Fabric, the ordering service provides strict ordering: once a transaction T1 has been written into block B1, the same transaction cannot be rewritten into a different block such as B2 or B3. The blocks generated by the ordering service are final; they are delivered to the orderer's ledger and made available for distribution to all the peers that have joined the channel.

Phase Three — Block validation and commit

In the final phase, the peers receive blocks from the orderer and validate every transaction in each block to ensure it was endorsed by the required organizations' peers. Because the orderer cannot remove a transaction from a block once it has been written, an invalid transaction is not discarded; instead, the peer marks it as invalid and does not apply it to the ledger state.

Ordering Service — Ordering transactions and blocks in Fabric Blockchain

Now let's dive into the file that contains the properties of an orderer: orderer.yaml.

1. General


BootstrapMethod: file
BootstrapFile: ./airline-genesis.block
BCCSP:
  Default: SW
  SW:
    Hash: SHA2
    Security: 256
    FileKeyStore:
      KeyStore:
# Directory for the private crypto material needed by the orderer
LocalMSPDir: ./cryptoconfig/ordererOrganizations/acme.com/orderers/orderer.acme.com/msp

2. Identity to register the local MSP material with the MSP

LocalMSPID: OrdererMSP

3. Listen address & Port : The IP & Port on which to bind to listen.


ListenAddress: 127.0.0.1
ListenPort: 7050

1.a Cluster

1. SendBufferSize is the maximum number of messages in the egress buffer. Consensus messages are dropped if the buffer is full, and transaction messages are waiting for space to be freed.

SendBufferSize: 10

2. ClientCertificate governs the file location of the client TLS certificate used to establish mutual TLS connections with other ordering service nodes.

ClientCertificate:

3. ClientPrivateKey governs the file location of the private key of the client TLS certificate.

ClientPrivateKey:

4. The below 4 properties should either be set together or be unset together. If they are set, the orderer node uses a separate listener for intra-cluster communication. If they are unset, the general orderer listener is used. This is useful if you want to use different TLS server certificates on the client-facing and intra-cluster listeners.

# ListenPort defines the port on which the cluster listens to connections.
ListenPort:
# ListenAddress defines the IP on which to listen to intra-cluster communication.
ListenAddress:
# ServerCertificate defines the file location of the server TLS certificate used for intra-cluster communication.
ServerCertificate:
# ServerPrivateKey defines the file location of the private key of the TLS certificate.
ServerPrivateKey:
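When all four properties are set together, the resulting Cluster section might look like the sketch below (the addresses and file paths are hypothetical, chosen only to illustrate the shape):

```yaml
# Hypothetical Cluster section with a dedicated intra-cluster listener.
Cluster:
  SendBufferSize: 10
  ClientCertificate: ./tls/cluster-client.crt   # hypothetical path
  ClientPrivateKey: ./tls/cluster-client.key    # hypothetical path
  ListenAddress: 10.0.0.5                       # hypothetical IP
  ListenPort: 7051
  ServerCertificate: ./tls/cluster-server.crt   # hypothetical path
  ServerPrivateKey: ./tls/cluster-server.key    # hypothetical path
```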

1.b Keepalive:

# Disconnect the client if the time between pings is less than the specified time
ServerMinInterval: 60s
# Server pings the clients on open connections with the specified time between pings
ServerInterval: 7200s
# Server expects the clients to respond to pings; the server disconnects if a response is not received within the timeout
ServerTimeout: 20s

1.c TLS:

TLS settings for the gRPC server.

Enabled: false
PrivateKey: ./server.key
Certificate: ./server.crt
RootCAs:
  - ./ca.crt
ClientAuthRequired: false
ClientRootCAs:

2. File Ledger

This section applies to the configuration of the file or json ledgers.

2.a FileLedger:

# Location: The directory to store the blocks in.
Location: /home/vagrant/ledgers/orderer/multi-org/ledger
# The prefix to use when generating a ledger directory in temporary space.
Prefix: hyperledger-fabric-ordererledger

3. Debug Configuration

This controls the debugging options for the orderer

3.a Debug:

1. BroadcastTraceDir when set will cause each request to the Broadcast service for this orderer to be written to a file in this directory

# BroadcastTraceDir: ./trace
BroadcastTraceDir:

2. DeliverTraceDir when set will cause each request to the Deliver service for this orderer to be written to a file in this directory

# DeliverTraceDir: ./trace
DeliverTraceDir:

4. Operations Configuration

4.a Operations:

# Host and port for the operations server
ListenAddress: 127.0.0.1:8443
# TLS configuration for the operations endpoint
TLS:
  # TLS enabled
  Enabled: false
  # Certificate is the location of the PEM-encoded TLS certificate
  Certificate:
  # PrivateKey points to the location of the PEM-encoded key
  PrivateKey:
  # Require client certificate authentication to access all resources
  ClientAuthRequired: false
  # Paths to PEM-encoded CA certificates to trust for client authentication
  RootCAs: []

5. Metrics Configuration

This configures metrics collection for the orderer

5.a Metrics:

# The metrics provider is one of statsd, prometheus, or disabled
Provider: disabled
# The statsd configuration
Statsd:
  # Network type: tcp or udp
  Network: udp
  # The statsd server address
  Address: 127.0.0.1:8125
  # The interval at which locally cached counters and gauges are pushed to statsd; timings are pushed immediately
  WriteInterval: 30s
  # The prefix is prepended to all emitted statsd metrics
  Prefix:
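To collect these metrics with Prometheus instead, it should be enough to switch the provider; the metrics are then served over the operations endpoint configured above (with this article's settings, at 127.0.0.1:8443 on the /metrics path):

```yaml
Metrics:
  # Expose metrics in Prometheus format via the operations listener
  Provider: prometheus
```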

5.b Consensus (orderer type = etcdraft):

1. The allowed key-value pairs here depend on the consensus plugin. For etcd/raft, we use the following options:

WALDir specifies the location at which Write Ahead Logs for etcd/raft are stored. Each channel will have its own subdir named after channel ID.

 WALDir: /var/hyperledger/production/orderer/etcdraft/wal

SnapDir specifies the location at which snapshots for etcd/raft are stored. Each channel will have its own subdir named after channel ID.

SnapDir: /var/hyperledger/production/orderer/etcdraft/snapshot
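The Consensus section above only covers local Raft storage; the consenter set itself is defined in the channel configuration. A hypothetical single-node consenter entry, reusing this article's acme.com naming (host and certificate paths are illustrative assumptions, not values from the article):

```yaml
# Illustrative configtx.yaml excerpt defining the etcd/raft consenter set.
Orderer:
  OrdererType: etcdraft
  EtcdRaft:
    Consenters:
      - Host: orderer.acme.com   # hypothetical host
        Port: 7050
        ClientTLSCert: ./cryptoconfig/ordererOrganizations/acme.com/orderers/orderer.acme.com/tls/server.crt
        ServerTLSCert: ./cryptoconfig/ordererOrganizations/acme.com/orderers/orderer.acme.com/tls/server.crt
```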
