Internet Computer performance


While providing the security of Web3 blockchains, the Internet Computer (IC) delivers performance comparable to Web2 and cloud technology stacks, and it far outperforms traditional blockchain protocols in efficiency.

Performance goals

A key objective of the Internet Computer is to provide a public compute layer that replaces traditional IT. A natural concern is that such blockchain-based computation will be far less efficient.

The Internet Computer works very differently from other blockchains and is powered by advanced new cryptography. Internally, the network is able to strictly limit the replication of data and computation while still providing the liveness and security guarantees expected of a blockchain. It can also assign different “trust levels” to units of blockchain code that it hosts (“smart contracts”), which changes the level of replication applied to their computations and data. In its current state of development, it is already orders of magnitude more efficient than other blockchains, but it is designed to eventually become more efficient than traditional IT too.

Like all blockchains, the Internet Computer network directly applies replication, in combination with advanced cryptography, to create a tamperproof platform with better liveness guarantees than traditional IT. Yet, it also limits replication, while using the replication that occurs to drive efficiency, for example by scaling out “query” transactions.

For example, a large online service might be built on Amazon Web Services using a database in a master-slave configuration, Kubernetes instances of web workers, memcached instances for caching the results of database queries, and a CDN (content distribution network) that caches the web content served at the edge of the network. This already creates a large amount of replication without creating a tamperproof platform or providing liveness guarantees. For example, each slave node of the database replicates its computations and data, and regular snapshots are also taken as backups; data used by the web workers is replicated by the memcached instances, and each worker also caches data in its own memory, while the results of web queries are replicated all over the world on CDN nodes.

Because replication is at the core of the design of the Internet Computer, it can derive powerful security, liveness and other properties from replication, while also applying it more efficiently. For example, because the Internet Computer is a single logical blockchain and platform, as it grows larger, the utilization of the underlying node hardware upon which it runs can be made higher than, say, a standalone server machine in a data center. A key objective of the Internet Computer is, over time, to provide a public compute platform that provides a more power efficient way for the world to build systems and services.

Performance experiments

Scalability of the Internet Computer is facilitated by sharding the IC into subnet blockchains. Every subnet blockchain can process update calls (writes) from ingress messages independently of other subnets. The IC can scale out by adding more subnets, at the cost of more network traffic (as applications potentially need to communicate across subnets). In its current form, the IC should be able to scale out to hundreds of subnets.

Query calls (reads) can be processed locally by nodes in a subnet. The response to a query call can therefore have low latency, since the query needs a response from only a single node and does not require inter-node communication or agreement. The more nodes a subnet has, the more query calls it can handle; and the more nodes the IC has, the more query calls it can handle.
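To make the distinction concrete from a client's point of view, here is a minimal sketch, assuming a recent version of the Rust ic-agent, candid, and tokio crates and a hypothetical counter canister exposing a get query method and an increment update method (the canister id is a placeholder). Only the update call waits for consensus:

  use candid::{Decode, Encode, Principal};
  use ic_agent::Agent;

  #[tokio::main]
  async fn main() -> Result<(), Box<dyn std::error::Error>> {
      let agent = Agent::builder()
          .with_url("https://ic0.app") // mainnet gateway
          .build()?;

      // Placeholder: replace with the id of a deployed counter canister.
      let canister_id = Principal::from_text("aaaaa-aa")?;

      // Query call: answered by a single node; no consensus round, hence low latency.
      let reply = agent
          .query(&canister_id, "get")
          .with_arg(Encode!()?)
          .call()
          .await?;
      println!("counter = {}", Decode!(&reply, u64)?);

      // Update call: submitted as an ingress message and agreed on by the
      // whole subnet through consensus before the certified reply is returned.
      let reply = agent
          .update(&canister_id, "increment")
          .with_arg(Encode!()?)
          .call_and_wait()
          .await?;
      println!("counter = {}", Decode!(&reply, u64)?);

      Ok(())
  }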

Test setup

The experiments were run concurrently against all subnets other than the NNS and some of the most utilized application subnets, to avoid disturbing active IC users. The IC has a set of boundary nodes that route calls to the core nodes hosting the subnets. The experiments sent load against the subnets directly and did not route traffic through the boundary nodes. Boundary nodes apply additional rate limiting, which is currently set slightly more conservatively than what the IC can handle; running against the boundary nodes would therefore be unsuitable for performance evaluation. The experiment targeted all nodes in every subnet concurrently, much as the boundary nodes would do if they were used.

The experiment consisted of installing one counter canister in every subnet. This counter canister is essentially a no-op canister: it only maintains a counter, which can be read via query calls and incremented via update calls. The counter value does not use orthogonal persistence, so the overhead for the execution layer of the IC is minimal. Stressing the counter canister can thus be seen as a way to determine the system overhead, or baseline performance.
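The exact canister used in the experiment is not reproduced here, but a minimal sketch of such a counter canister in Rust, assuming the ic-cdk crate (the method names increment and get are illustrative), could look as follows:

  use std::cell::Cell;

  thread_local! {
      // Plain in-memory counter; no extra persistence machinery, so the
      // per-call execution overhead is minimal.
      static COUNTER: Cell<u64> = Cell::new(0);
  }

  // Update call: replicated on every node of the subnet and agreed on
  // through consensus before the response is certified.
  #[ic_cdk::update]
  fn increment() -> u64 {
      COUNTER.with(|c| {
          c.set(c.get() + 1);
          c.get()
      })
  }

  // Query call: read-only and answered locally by a single node.
  #[ic_cdk::query]
  fn get() -> u64 {
      COUNTER.with(|c| c.get())
  }

The query annotation is what lets get be answered by a single node, while increment goes through consensus.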

Measurements

We evaluate the performance of the IC in a CD pipeline that runs periodically. These benchmarks target a single subnetwork with a configuration close to that of IC nodes on mainnet. Scaling those numbers up to the current number of nodes and subnetworks on mainnet yields the following:

Query calls: 3,196,225 queries/s (7,025 queries/s per node scaled up to 455 nodes in application subnetworks)

Update calls: 33,749 updates/s (1,023 updates/s per subnetwork scaled up to 33 application subnetworks)

The above calculation is based on measurements from 2023-11-22.
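The scale-up itself is simple arithmetic: query throughput multiplies out per node, update throughput per subnetwork. A small sketch using the rounded figures above (the products differ slightly from the quoted totals, which appear to be derived from unrounded measurements):

  // Rounded per-node / per-subnetwork figures from the 2023-11-22 run above.
  const QUERIES_PER_NODE_PER_S: u64 = 7_025;
  const APP_NODES: u64 = 455;
  const UPDATES_PER_SUBNET_PER_S: u64 = 1_023;
  const APP_SUBNETS: u64 = 33;

  fn main() {
      // Queries scale with the node count: any node can answer a query locally.
      println!("queries/s ~ {}", QUERIES_PER_NODE_PER_S * APP_NODES);
      // Updates scale with the subnet count: each subnet runs consensus independently.
      println!("updates/s ~ {}", UPDATES_PER_SUBNET_PER_S * APP_SUBNETS);
  }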

All benchmarks run against a small number of canisters that simply return, as the goal of this benchmark is to measure the throughput of the messaging subsystem and to determine the runtime overhead of message processing.

Canister code can be (almost) arbitrarily complex and can therefore significantly lower the throughput if canister execution becomes the bottleneck (rather than messaging).

Previous measurements

The following measurements were made on May 24, 2022, with 31 application subnets (each having 13 nodes) out of a total of 35 subnets (the other 4 are system subnets, such as the NNS and SNS subnets, which have more nodes). The benchmarks were executed by simultaneously stressing all subnetworks on mainnet.

Update calls

The Internet Computer sustained more than 20,841 update calls per second to application canisters for a period of four minutes (averaging 672 updates per second per subnet). The update calls measured here are triggered by ingress messages sent from outside the IC.

Query calls

Arguably more important are query calls, since they account for more than 90% of the traffic observed on the IC. The Internet Computer processed 1,125,982 query calls per second to application canisters (averaging 2,792 queries per second per node). During the experiment, the load was increased incrementally and each load level was run for a period of five minutes.


Conclusion and next steps

The Internet Computer today already shows impressive performance. On top of that, it should be possible to further scale out the IC using:

  • More subnets: This will immediately increase the query and update call throughput. While adding subnets might eventually lead to other scalability problems, the IC in its current shape should be able to support hundreds of subnets.
  • Performance improvements: Performance can also be improved through better single-machine, network, and consensus performance tuning. Increasing performance by at least an order of magnitude is plausible.
