Event Service Framework V2.0
TRON features an event service mechanism that enables developers to receive and process on-chain events via custom event plugins. The overall architecture of the event service is as follows: The event service retrieves and encapsulates on-chain event information, then writes this data to an event cache queue for asynchronous consumption by the event plugin. Upon receiving the event data, the plugin can write the data to databases, message queues, or other target systems based on business requirements to further support upper-layer applications.
In the event service framework V1.0 (hereafter the V1.0 framework or V1.0), the encapsulation of event data and the operation of writing data to the queue were highly coupled with the block execution process:
- Transaction Execution Phase: Encapsulates contract event data (`contractEventTrigger`) and contract log event data (`contractLogTrigger`);
- Block Execution Phase: Encapsulates block event data (`blockTrigger`) and transaction event data (`transactionTrigger`).
Following encapsulation, the event data was immediately written to the event queue for subsequent processing by the event plugin. The overall process flow is illustrated in the following diagram:
Due to the coupling of the event processing logic with the block processing flow, any event service anomaly could cause block processing failures, thereby impacting block broadcasting and synchronization. Additionally, V1.0 did not support historical event replay and lacked an event push rate control mechanism, making it inadequate for complex application scenarios.
To address these issues, TRON introduced the event service framework V2.0 (hereafter the V2.0 framework or V2.0) in the GreatVoyage-v4.8.0 (Kant) release. This framework decouples the event processing logic from the block execution flow, restructuring it as an independent module, which significantly enhances system stability and scalability. In addition, while preserving the independence of the block processing flow, V2.0 supports historical event replay and incorporates an event push rate control mechanism, effectively improving the applicability and robustness of the event service across various use cases.
Event Service Framework V2.0 Overview
This document will provide a detailed explanation of the V2.0 framework, covering the following key aspects:
- Independent Event Service Module
- Historical Event Replay Feature
- Consumption Rate Awareness Mechanism
Independent Event Service Module
In V2.0, the event service is designed as an independent module, achieving complete decoupling from the core block processing logic. This module neither depends on other business modules nor can it be directly invoked by them.
Upon startup, the event service reads block data from the database, then encapsulates the required event types based on user-configured event subscription options. The encapsulated event data is then placed into the event queue for subsequent processing by the event plugin.
This architecture significantly enhances system flexibility, stability, and scalability, especially effective for complex or high-concurrency application scenarios.
Core Data Structure
`BlockEvent`: Represents the complete set of event data for a block. It includes:
- Block Event Data (`blockTrigger`)
- Transaction Event Data (`transactionTrigger`)
- Contract Event Data (`contractEventTrigger`)
- Contract Log Event Data (`contractLogTrigger`)
In contrast to V1.0, which processed event data separately, V2.0 completes the unified encapsulation of all event data in a single load operation.
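As a rough illustration of this unified encapsulation, the container could be sketched as follows. The field and class names here are assumptions for demonstration, not the exact java-tron definitions:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the BlockEvent container: one object bundles every
// trigger produced by a single block (names are assumptions, not the exact
// java-tron definitions).
class BlockEventSketch {
    final long blockNum;
    Object blockTrigger;                                          // block event data
    final List<Object> transactionTriggers = new ArrayList<>();   // one per transaction
    final List<Object> contractEventTriggers = new ArrayList<>(); // per emitted contract event
    final List<Object> contractLogTriggers = new ArrayList<>();   // per contract log entry

    BlockEventSketch(long blockNum) {
        this.blockNum = blockNum;
    }

    // Total number of triggers this block contributes to the event queue.
    int triggerCount() {
        return (blockTrigger == null ? 0 : 1)
                + transactionTriggers.size()
                + contractEventTriggers.size()
                + contractLogTriggers.size();
    }
}
```

Because all four trigger types live in one object, a single load operation produces everything the event plugin needs for that block.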
Core Interface
`BlockEvent getBlockEvent(long blockNum)`: Reads data related to the specified block height from the database, encapsulates the event data, and returns a `BlockEvent` object.
Core Threads
- `BlockEventLoad` (Data Loading Thread): Reads block-related data from the database and encapsulates event data. The encapsulated data is then passed to the following threads for processing:
- `RealtimeEventService` (Real-time Event Processing Thread): Processes event data for new blocks.
- `SolidEventService` (Solidified Block Event Processing Thread): Processes event data for newly solidified blocks.
The following diagram illustrates the thread collaboration:
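In code terms, the hand-off between these threads can be sketched roughly as below. The queue capacities and the dispatch rule are assumptions, and block numbers stand in for full `BlockEvent` objects:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal sketch of the V2.0 thread hand-off (capacities and dispatch rule
// are assumptions): BlockEventLoad loads a block's events and hands them to
// the real-time and solidified-event services through in-memory queues.
class EventPipelineSketch {
    static final BlockingQueue<Long> realtimeQueue = new ArrayBlockingQueue<>(1024);
    static final BlockingQueue<Long> solidQueue = new ArrayBlockingQueue<>(1024);

    // BlockEventLoad: dispatch one loaded block to the downstream services.
    static void load(long blockNum, long latestSolidifiedNum) {
        realtimeQueue.offer(blockNum);        // consumed by RealtimeEventService
        if (blockNum <= latestSolidifiedNum) {
            solidQueue.offer(blockNum);       // consumed by SolidEventService
        }
    }
}
```

The key point is the direction of flow: only `BlockEventLoad` touches the database, while the two event services consume from queues and never call back into block processing.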
Historical Event Replay Feature
V2.0 introduces the historical event replay feature, addressing V1.0’s limitation of only supporting real-time event pushing.
In V1.0, events were only pushed to subscribers in real time when new blocks were processed, with no support for replaying events from historical blocks.
V2.0 now supports processing and pushing events from local historical blocks, meeting user demand for historical data subscriptions. This feature can be configured via the following option:
```
event.subscribe.startSyncBlockNum = <starting block height>
```
- `startSyncBlockNum <= 0`: the historical event synchronization feature is turned off.
- `startSyncBlockNum > 0`: the feature is turned on, and historical events will be synchronized starting from the specified block height. (Note: enabling this feature is recommended in conjunction with the consumption rate control mechanism to avoid abnormal node resource usage caused by event backlog.)
Caution: Always ensure that the `startSyncBlockNum` parameter is configured correctly before restarting the node. A correctly configured node will synchronize historical events from the specified block height upon startup. An incorrect configuration can lead to duplicate or missed event pushes, affecting the correctness of the business logic.
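The rules above can be paraphrased in a small sketch; the helper names are illustrative, not java-tron APIs:

```java
// Sketch of how the startSyncBlockNum option is interpreted, paraphrasing
// the rules above (helper names are illustrative, not java-tron APIs).
class ReplayConfigSketch {
    // startSyncBlockNum <= 0 disables historical event synchronization.
    static boolean replayEnabled(long startSyncBlockNum) {
        return startSyncBlockNum > 0;
    }

    // Returns the height to start pushing historical events from,
    // or -1 when replay is disabled and only new blocks are pushed.
    static long resolveStartBlock(long startSyncBlockNum) {
        return replayEnabled(startSyncBlockNum) ? startSyncBlockNum : -1L;
    }
}
```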
Consumption Rate Awareness Mechanism
In the V1.0 framework, event data, once encapsulated, was pushed directly to the event queue for asynchronous consumption by the plugin. However, when a plugin's consumption capacity was insufficient, event data could accumulate continuously in memory, eventually leading to out-of-memory problems.
To address this problem, V2.0 introduces a consumption rate awareness mechanism by adding a new plugin interface, `getPendingSize`, which queries the number of pending events in the current event queue. Before loading new event data, the event service calls the `getPendingSize` interface to check the plugin's current consumption status:
- Return value > 50000: the plugin is busy, and the event service pauses loading new event data to prevent continuous memory accumulation.
- Return value <= 50000: the plugin has sufficient consumption capacity, and the event service continues to push data.
This mechanism enables dynamic regulation between event loading and the plugin consumption capacity, effectively enhancing the system’s stability and robustness in high-concurrency processing and historical data synchronization scenarios.
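The check itself is simple; a minimal sketch follows, where the 50000 threshold comes from the description above and the interface shape is an assumption for illustration:

```java
// Sketch of the consumption rate check. The 50000 threshold comes from the
// description above; the interface shape is an assumption for illustration.
class BackpressureSketch {
    static final long BUSY_THRESHOLD = 50_000L;

    // Stand-in for the plugin-side interface that reports
    // queued-but-unconsumed events.
    interface EventPlugin {
        long getPendingSize();
    }

    // The event service pauses loading new event data while the plugin is busy.
    static boolean shouldPauseLoading(EventPlugin plugin) {
        return plugin.getPendingSize() > BUSY_THRESHOLD;
    }
}
```

Polling the plugin before each load turns the queue depth into a feedback signal, so a slow consumer throttles the producer instead of exhausting node memory.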
Note: If you choose to use the V2.0 framework, we strongly recommend upgrading the event plugins to their latest versions (v2.1.0) to ensure compatibility with consumption rate control.
Version Notes
To ensure application developers have sufficient time for a smooth transition to the new version, the original V1.0 framework is still retained in Kant. We recommend that you gradually migrate to the V2.0 framework based on your specific needs.
The V1.0 framework will be completely removed in a future release when deemed appropriate, with only V2.0 retained. We strongly recommend that application developers plan for compatibility adaptation and version switching in advance.
The event service will default to V1.0 after the Kant version deployment. To enable V2.0, the switch can be made via the following configuration option:
```
event.subscribe.version = 1  // 1 means V2.0, 0 means V1.0
```
Compatibility
The specific differences between the Event Service Framework V1.0 and V2.0 are as follows:

| Capability | V1.0 | V2.0 |
| --- | --- | --- |
| Coupling with block processing | Event logic embedded in the block execution flow | Independent module, fully decoupled |
| Historical event replay | Not supported | Supported via `startSyncBlockNum` |
| Consumption rate control | Not supported | Supported via `getPendingSize` |
| Internal transaction subscription | Supported | Not yet supported |
How to Migrate to the V2.0 Framework?
Key Considerations Before Migration
- Application Dependency on the Internal Transaction Subscription Feature
  The V2.0 framework does not support internal transaction subscription: the `internalTransactionList` field in emitted `transactionLogTrigger` events will be empty. Therefore, migration is not recommended at this time for applications that rely on internal transaction information. Please continue using the V1.0 framework until this feature is supported in a later version.
- Plugin Version Compatibility
  To support the consumption rate awareness mechanism of V2.0, we strongly recommend upgrading to the event plugins' latest versions (v2.1.0). This is especially important when synchronizing events from a specified block height, because of the potentially large volume of event data: insufficient plugin consumption capacity can lead to continuous memory growth or even memory leaks in the node.
Steps for Migration
1. Generating the New Event Plugin
Clone the `event-plugin` project from the GitHub repository and switch to the dedicated branch for the new plugin version. Then, execute the build command to generate the `.zip` file for the new plugin.

```shell
git clone git@github.com:tronprotocol/event-plugin.git
cd event-plugin
git checkout feature/new_event_service
./gradlew build
```
2. Enabling the V2.0 Framework via Configuration
In the `FullNode` configuration file, add the following configuration to enable the V2.0 framework:

```
event.subscribe.version = 1
```
3. Configuring Event Subscription
The V2.0 framework’s subscription configuration method remains consistent with V1.0; no additional modifications are required. Please refer to the event subscription configuration documentation for detailed configuration instructions.
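For orientation, a minimal `event.subscribe` block might look like the fragment below. The plugin path and the topics list are illustrative only; the event subscription configuration documentation remains the authoritative reference for the full option list.

```
event.subscribe = {
  version = 1                    // 1 means V2.0, 0 means V1.0
  startSyncBlockNum = 0          // <= 0 keeps historical replay off
  path = "/path/to/plugin.zip"   // illustrative plugin location
  topics = [
    { triggerName = "block", enable = true, topic = "block" },
    { triggerName = "transaction", enable = true, topic = "transaction" }
  ]
}
```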
4. (Optional) Synchronizing Historical Block Events
The V2.0 framework supports historical event synchronization starting from a specified block height. You can set the starting synchronization height using the following configuration:
```
event.subscribe.startSyncBlockNum = <block_height>
```
5. Starting the Node and Plugin
Upon completing the aforementioned configurations, start the FullNode and the corresponding event plugin to finalize the migration to the V2.0 framework. The node startup command is as follows:
```shell
java -jar FullNode.jar -c config.conf --es
```