Over the New Year, a test Saito node we call Marathon produced its millionth block. Along the way it validated over 10 million transactions and bundled them into blocks at a rate of roughly one block every 5 seconds, for an average of 2 transactions per second.
| Parameter | Value |
| --- | --- |
| Genesis Period | 100 blocks |
| Spam Volume | 10 × 0.01 MB tx/block |
| Block Time | 5 seconds |
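As a sanity check, the headline numbers are consistent with one another. A quick back-of-the-envelope calculation (plain Python, not Saito code) using only figures from this post:

```python
# Back-of-the-envelope check of the Marathon test parameters.
# All numbers come from the post itself; this is not Saito code.
blocks = 1_000_000        # blocks produced by the Marathon node
txs_per_block = 10        # spam volume: 10 x 0.01 MB tx per block
block_time_s = 5          # roughly one block every 5 seconds

total_txs = blocks * txs_per_block            # total transactions validated
tps = txs_per_block / block_time_s            # average throughput
run_days = blocks * block_time_s / 86_400     # wall-clock duration

print(f"{total_txs:,} txs, {tps} TPS, {run_days:.0f} days")
```

This works out to 10 million transactions at 2 TPS over roughly 58 days, which lines up with the two months of runtime described below.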
This is not a heavy load for Saito – 2 TPS is light traffic. The purpose of the experiment was to check for memory leaks, or any other software issues that could compound over time and eventually result in node failure or system collapse.
From this perspective the test has been a great success. The node reached a steady state within an hour of launch and has held it for two months. The server running the software is also an extremely lightweight machine, with only 800 MB of RAM in total (580 MB of which is dedicated to the Saito process). The load on the machine has remained near zero throughout. Amusingly, the monitoring tools we run place almost as much memory and processor burden on the server as the Saito instance itself.
| Metric | Value |
| --- | --- |
| Data Stored | 2.5 MB |
| Data Processed | 12.6 GB |
This is just one small test among many we have ongoing, but it is reassuring to know that Saito can run so stably and error-free for such an extended period. The test also demonstrates that transaction rebroadcasting on a transient chain works: the chain stays compact while long-term data storage remains available to paying transactions. The maximum amount of data ever stored on the server across the entire test was 2.5 MB, out of 12.6 GB processed in total.
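The mechanism can be illustrated with a toy model. The sketch below is a simplified simulation of a transient chain, not Saito's actual implementation: blocks older than the genesis period are pruned, and any transaction that has paid for storage beyond its block's lifetime is rebroadcast into the newest block. The 500-block paid lifetime and the one-paying-transaction-in-ten ratio are invented for illustration.

```python
from collections import deque

# Toy model of a transient chain with rebroadcasting. Blocks older than
# GENESIS_PERIOD are pruned; transactions that paid for storage beyond
# their block's lifetime are rebroadcast into the newest block. The paid
# lifetime and paying-tx ratio here are invented for illustration only.
GENESIS_PERIOD = 100     # blocks kept on-chain, as in the Marathon test
TX_SIZE_MB = 0.01        # per-transaction size used in the spam test

chain = deque()          # each block: list of (tx_id, paid_until_height)
processed_mb = 0.0       # cumulative data handled by the node
peak_stored_mb = 0.0     # most data ever held on-chain at once
tx_id = 0

for height in range(1, 10_001):
    block = []
    for i in range(10):  # 10 fresh spam transactions per block
        # one tx in ten has paid to persist for 500 blocks (hypothetical)
        paid_until = height + 500 if i == 0 else height
        block.append((tx_id, paid_until))
        tx_id += 1
    if len(chain) >= GENESIS_PERIOD:
        # prune the oldest block; rebroadcast still-paying transactions
        expired = chain.popleft()
        block.extend(tx for tx in expired if tx[1] > height)
    chain.append(block)
    processed_mb += len(block) * TX_SIZE_MB
    stored_mb = sum(len(b) for b in chain) * TX_SIZE_MB
    peak_stored_mb = max(peak_stored_mb, stored_mb)

print(f"peak stored: {peak_stored_mb:.1f} MB "
      f"of {processed_mb:.1f} MB processed")
```

In this model on-chain storage stays bounded (around 14 MB here) no matter how long the loop runs, while total data processed grows without limit – the same shape as Marathon's 2.5 MB stored against 12.6 GB processed.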
We plan to leave Marathon running and see how it is faring at the two-million-block mark. Onwards and upwards.