Development Update — September 6, 2018

Hello Lisk Community,
With the incredibly eventful network migration week behind us, we're moving swiftly forward with the latest versions of all of our products. If you somehow missed our major blog post from last week describing them all in detail, be sure to check it out. We would also like to announce that going forward, we will cover all of the products of the Lisk ecosystem in the development update. As a result, these development updates will no longer be weekly, but biweekly. This is the last weekly development update; the next one will come out on September 20. We really look forward to giving you a much more thorough analysis of Lisk's progress going forward!
Lisk Core 1.0.2 Patch Release
Issue #2370: On Saturday, September 1, there was a network incident on our Testnet. During his tests, community member Andrea sent a transaction with the data field (an optional message that you can include in your transaction) set to \u0000hey:). His tests weren't aimed at finding a bug, since \u0000 is the valid Unicode character NULL (U+0000). However, in the database we store this data field as type BYTEA and retrieve it in SQL queries through CONVERT_FROM. Unfortunately, PostgreSQL's text type cannot contain NULL bytes, so the conversion to UTF-8 returns an error. This caused nodes to crash whenever a database query involving this transaction was executed. The Mainnet network wasn't affected, but it was our top priority to get this fixed as soon as possible. Several members of our dedicated development team — François, Manu, Mariusz, and Shu — went into the office to work on the fix while other team members supported them remotely. Because the issue was identified quickly, the solution was also implemented and tested quickly. After just a few hours, we released the patch.
Lisk Core 1.1.0
Following the release of Lisk Core 1.0.1, we identified several issues regarding the performance of API endpoints. Instead of fixing them in a patch release, we decided to reopen version 1.1.0 and make the fixes there.
Issue #2348: The /api/node/status endpoint returns the total count of confirmed transactions. Under the hood, the count was retrieved by a SQL query that performed COUNT on the transactions (trs) table. This operation is expensive to execute, as the table contains millions of records. We fixed the performance by caching the count.
Issue #2352: We found the performance of another API endpoint, /api/delegates/&lt;address&gt;/forging_statistics, unsatisfactory. The main reason for introducing this endpoint was to allow retrieving forging statistics for a period of time, such as one week or one month. The endpoint is not well suited for getting all-time forging statistics, as the underlying query performs the aggregate function SUM on two columns over a large data set (~6M records and growing). We fixed the performance by using data from the mem_accounts table when all-time statistics are requested (i.e., when no time filter is provided).
Issue #2351: We were taking a performance hit in the /api/delegates API endpoint because one of the fields, rank, was computed dynamically. To get this field, we had to execute a subquery for every registered delegate, which on Mainnet means around 1,700 subqueries. We fixed the performance by storing rank as a normal column in the database and updating it every time a round changes (every 101 blocks).
Issue #2350: We had a similar case with the /api/accounts endpoint. Two subqueries were being executed for every row touched by the main SQL query; for example, with limit 100 and offset 1000, those subqueries were executed 1,100 times. However, the data retrieved from those subqueries wasn't used to generate the API response. The fix was to request only the fields needed for the API response, so those subqueries are no longer executed.
Issue #2330: Opened by our community member Corsaro, this issue was previously assigned to Version 1.3.0. We decided to include it in 1.1.0 because it affects delegates running pools (sharing rewards with their voters). The issue concerned incorrect result ordering when using sorting in the /api/voters API endpoint. We fixed it by adding the proper ordering logic to the underlying SQL query that retrieves the results from the database.
Next steps
Because Lisk Core 1.1.0 was extended with additional issues, we need to perform another QA round to ensure the highest possible quality for this release. We've already begun and expect to finish by next week.
Over the last week, we were also focused on Version 1.3.0, which will be one of our next releases. We’re progressing very well — we have already solved several issues and have pull requests opened for another four.
Thank you again to our all-star community. We look forward to the Reddit AMA on September 18 and to updating you more comprehensively in the next Biweekly Development Update.
-The Lisk Team

