Internet Computer performance

Revision as of 09:24, 10 November 2021

This post describes our performance evaluation of the Internet Computer.

Scalability of the Internet Computer comes from partitioning the IC into subnetworks.

Subnetworks process update calls from ingress messages independently of other subnetworks. The IC can therefore scale out by adding more subnetworks, at the cost of additional network traffic (as applications may then need to communicate across subnetwork boundaries). In its current form, the IC should be able to scale out to hundreds of subnetworks.

Query calls are read-only calls that are processed locally on each node. Scalability comes from adding more nodes, either to an existing subnetwork (at the cost of making consensus, and therefore update calls, more expensive) or as part of new subnetworks.

We will periodically update the numbers in this article to reflect improvements we achieve over time.

Test setup

We run all of our experiments concurrently against all subnetworks other than the NNS and some of the most utilized application subnetworks, to avoid disturbing active IC users. We send load against those subnetworks directly and do not use boundary nodes for these experiments: boundary nodes apply additional rate limiting that is currently set slightly more conservatively than what the IC can handle, so running against them is unsuitable for performance evaluation. We target all nodes in each subnetwork concurrently, much as boundary nodes would if we were using them.
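To illustrate the direct-to-node pattern, here is a minimal sketch of a client issuing one update call and one query call against a single replica node, using recent versions of the Rust agent library `ic-agent` together with `candid` for argument encoding. The node URL and canister id are placeholders, and the method names (`read`, `inc`) match the counter canister described below; the actual load-generation tooling used for these experiments is not shown here.

```rust
use candid::{Decode, Encode, Principal};
use ic_agent::Agent;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Talk to a single replica node directly, bypassing the boundary nodes.
    let agent = Agent::builder()
        .with_url("http://NODE_IP:8080") // placeholder node address
        .build()?;
    // Fetch the root key; appropriate for test setups that do not go
    // through the boundary nodes (do not do this against production mainnet).
    agent.fetch_root_key().await?;

    let counter = Principal::from_text("CANISTER_ID")?; // placeholder canister id

    // Update call: goes through consensus on the subnetwork.
    agent
        .update(&counter, "inc")
        .with_arg(Encode!()?)
        .call_and_wait()
        .await?;

    // Query call: answered locally by the node we are talking to.
    let reply = agent
        .query(&counter, "read")
        .with_arg(Encode!()?)
        .call()
        .await?;
    println!("counter = {}", Decode!(&reply, u64)?);
    Ok(())
}
```

A real load generator would run many such calls concurrently against every node of the subnetwork, which is the fan-out behavior the boundary nodes would otherwise provide.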

We have installed one counter canister in each subnetwork. This counter canister is essentially a no-op canister: it only maintains a counter, which can be read via a query call and incremented via an update call. The counter value does not use orthogonal persistence, so the overhead for the execution layer of the IC is minimal. Stressing the counter canister can therefore be seen as a way to determine the system overhead, i.e. the baseline performance.
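For concreteness, a counter canister along these lines could look as follows in Rust with the `ic-cdk` crate. This is a sketch under our own naming (`read`, `inc`), not necessarily the exact canister deployed for these experiments.

```rust
use std::cell::Cell;

thread_local! {
    // Plain Wasm heap state; deliberately NOT kept in stable memory
    // (no orthogonal persistence), keeping execution overhead minimal.
    static COUNTER: Cell<u64> = Cell::new(0);
}

/// Read the counter via a query call (answered locally by one node).
#[ic_cdk::query]
fn read() -> u64 {
    COUNTER.with(|c| c.get())
}

/// Increment the counter via an update call (goes through consensus).
#[ic_cdk::update]
fn inc() -> u64 {
    COUNTER.with(|c| {
        c.set(c.get() + 1);
        c.get()
    })
}
```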


Measurements

Update calls

The Internet Computer can currently sustain more than 11,000 updates/second for a period of four minutes, with peaks over 11,500 updates/second.

[Figure: Update call performance]

Query calls

Arguably more important are query calls, since they account for more than 90% of the traffic we observe on the IC.

[Figure: Query call performance]

The Internet Computer can currently process up to 250,000 queries per second. We are working on several improvements that should raise query performance by at least one order of magnitude. During our experiments, we increase the load step by step and run each load level for a period of 5 minutes, as sketched below.
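To make the load schedule concrete, here is a hypothetical sketch of the stepped schedule: each target rate runs for five minutes before stepping up. The step values and the `run_load` helper are purely illustrative, not the actual rates or tooling used.

```rust
use std::time::Duration;

// Purely illustrative stand-in for the real workload generator, which
// would issue query calls against all nodes of a subnetwork concurrently.
async fn run_load(rate_per_sec: u64, duration: Duration) {
    println!("running {} queries/s for {:?}", rate_per_sec, duration);
    // ... issue `rate_per_sec` query calls per second for `duration` ...
}

// Stepped schedule: each requested rate runs for five minutes before
// the next step. The step values here are illustrative.
async fn stepped_load() {
    for rate in [50_000u64, 100_000, 150_000, 200_000, 250_000] {
        run_load(rate, Duration::from_secs(5 * 60)).await;
    }
}
```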